Biblio

Found 2246 results

Filters: First Letter Of Last Name is J
2017-05-16
Jang, Min-Hee, Kim, Sang-Wook, Ha, Jiwoon.  2016.  Effectiveness of Reverse Edges and Uncertainty in PIN-TRUST for Trust Prediction. Proceedings of the Sixth International Conference on Emerging Databases: Technologies, Applications, and Theory. :81–85.

Recently, PIN-TRUST, a method for predicting future trust relationships between users, was proposed. PIN-TRUST outperforms existing trust prediction methods by exploiting all types of interactions between users and the reciprocation of those interactions. In this paper, we validate whether its consideration of the reciprocation of interactions is really effective in trust prediction. Furthermore, we consider a new concept, the "uncertainty" of untrustworthy users, devised to reflect the difficulty of modeling the activities of untrustworthy users in PIN-TRUST. We also validate the effectiveness of this uncertainty concept. Through this validation, we reveal that considering the reciprocation of interactions is effective for trust prediction with PIN-TRUST, and that it is necessary to treat the uncertainty of untrustworthy users the same as that of other users.

2017-06-27
Jafarian, Jafar Haadi, Niakanlahiji, Amirreza, Al-Shaer, Ehab, Duan, Qi.  2016.  Multi-dimensional Host Identity Anonymization for Defeating Skilled Attackers. Proceedings of the 2016 ACM Workshop on Moving Target Defense. :47–58.

While existing proactive paradigms such as address mutation are effective in slowing down reconnaissance by naive attackers, they are ineffective against skilled human attackers. In this paper, we analytically show that the goal of defeating reconnaissance by skilled human attackers is only achievable by an integration of five defensive dimensions: (1) mutating host addresses, (2) mutating host fingerprints, (3) anonymizing host fingerprints, (4) deploying high-fidelity honeypots with context-aware fingerprints, and (5) deploying context-aware content on those honeypots. Using a novel class of honeypots, referred to as proxy honeypots (high-interaction honeypots with customizable fingerprints), we propose a proactive defense model, called HIDE, that constantly mutates the addresses and fingerprints of network hosts and proxy honeypots in a manner that maximally anonymizes the identity of network hosts. The objective is to make a host untraceable over time by not letting even skilled attackers reuse attributes of a host discovered in previous scans, including its addresses and fingerprint, to identify that host again. The mutations are generated through formal definition and modeling of the problem. Using a red-teaming evaluation with a group of white-hat hackers, we evaluated our five-dimensional defense model and compared its effectiveness with alternative and competing scenarios. These experiments, as well as our analytical evaluation, show that by anonymizing all identifying attributes of a host/honeypot over time, HIDE is able to significantly complicate reconnaissance, even for highly skilled human attackers.

2017-10-19
Jardine, William, Frey, Sylvain, Green, Benjamin, Rashid, Awais.  2016.  SENAMI: Selective Non-Invasive Active Monitoring for ICS Intrusion Detection. Proceedings of the 2Nd ACM Workshop on Cyber-Physical Systems Security and Privacy. :23–34.
Current intrusion detection systems (IDSs) for industrial control systems (ICSs) mostly involve retrofitting conventional network IDSs, such as SNORT. Such an approach is prone to missing highly targeted and specific attacks against ICSs. Where ICS-specific approaches exist, they often rely on passive network monitoring techniques, offering a low-cost solution and avoiding any computational overhead arising from actively polling ICS devices. However, the use of passive approaches alone could fail to detect attacks that alter the behaviour of ICS devices (as was the case in Stuxnet). Where active solutions exist, they can be resource-intensive, posing the risk of overloading legacy devices, which are commonplace in ICSs. In this paper we aim to overcome these challenges through the combination of a passive network monitoring approach and selective active monitoring based on attack vectors specific to an ICS context. We present the implementation of our IDS, SENAMI, for use with Siemens S7 devices. We evaluate the effectiveness of SENAMI in a comprehensive testbed environment, demonstrating the validity of the proposed approach through the detection of purely passive attacks at a rate of 99%, and active value tampering attacks at a rate of 81-93%. Crucially, we reach recall values greater than 0.96, indicating few attack scenarios generating false negatives.
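The selective active monitoring idea lends itself to a short sketch. This is not SENAMI's implementation: the monitored points, tolerances, and the read_plc_value stand-in below are invented for illustration, and a real deployment would read the values over the S7 protocol while a passive network IDS handles the remaining traffic.

```python
# Sketch: actively poll only a few safety-critical PLC values and flag
# deviations; all other monitoring stays passive. Points, tolerances, and
# the read function are illustrative stand-ins, not SENAMI's code.

# (name, address, expected value, tolerance) for the few values worth
# the cost of actively polling a legacy device.
MONITORED_POINTS = [
    ("pump_setpoint",  "DB1.DBD0", 50.0, 1.0),
    ("valve_position", "DB1.DBD4", 25.0, 0.5),
]

def read_plc_value(address):
    """Stand-in for real device I/O (e.g. an S7 read); returns a float."""
    simulated = {"DB1.DBD0": 50.2, "DB1.DBD4": 27.1}   # pretend readings
    return simulated[address]

def active_value_check():
    """Return alerts for monitored values drifting outside tolerance."""
    alerts = []
    for name, address, expected, tol in MONITORED_POINTS:
        value = read_plc_value(address)
        if abs(value - expected) > tol:
            alerts.append(f"{name} at {address}: read {value}, expected "
                          f"{expected} +/- {tol} (possible value tampering)")
    return alerts

for alert in active_value_check():
    print("[ALERT]", alert)
```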
2017-05-16
Depping, Ansgar E., Mandryk, Regan L., Johanson, Colby, Bowey, Jason T., Thomson, Shelby C..  2016.  Trust Me: Social Games Are Better Than Social Icebreakers at Building Trust. Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play. :116–129.

Interpersonal trust is one of the key components of efficient teamwork. Research suggests two main approaches for trust formation: personal information exchange (e.g., social icebreakers), and creating a context of risk and interdependence (e.g., trust falls). However, because these strategies are difficult to implement in an online setting, trust is more difficult to achieve and preserve in distributed teams. In this paper, we argue that games are an optimal environment for trust formation because they can simulate both risk and interdependence. Results of our online experiment show that a social game can be more effective than a social task at fostering interpersonal trust. Furthermore, trust formation through the game is reliable, but trust depends on several contingencies in the social task. Our work suggests that gameplay interactions do not merely promote impoverished versions of the rich ties formed through conversation, but rather engender genuine social bonds.

2017-09-11
Jia, Yaoqi, Chua, Zheng Leong, Hu, Hong, Chen, Shuo, Saxena, Prateek, Liang, Zhenkai.  2016.  "The Web/Local" Boundary Is Fuzzy: A Security Study of Chrome's Process-based Sandboxing. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :791–804.

Process-based isolation, suggested by several research prototypes, is a cornerstone of modern browser security architectures. Google Chrome is the first commercial browser that adopts this architecture. Unlike several research prototypes, Chrome's process-based design does not isolate different web origins, but primarily promises to protect "the local system" from "the web". However, as billions of users now use web-based cloud services (e.g., Dropbox and Google Drive), which are integrated into the local system, the premise that browsers can effectively isolate the web from the local system has become questionable. In this paper, we argue that, if the process-based isolation disregards the same-origin policy as one of its goals, then its promise of maintaining the "web/local system (local)" separation is doubtful. Specifically, we show that existing memory vulnerabilities in Chrome's renderer can be used as a stepping-stone to drop executables/scripts in the local file system, install unwanted applications and misuse system sensors. These attacks are purely data-oriented and do not alter any control flow or import foreign code. Thus, such attacks bypass binary-level protection mechanisms, including ASLR and in-memory partitioning. Finally, we discuss various full defenses and present a possible way to mitigate the attacks presented.

2016-10-03
Nuthan Munaiah, Andrew Meneely, Benjamin Short, Ryan Wilson, Jordan Tice.  2016.  Are Intrusion Detection Studies Evaluated Consistently? A Systematic Literature Review :18.

Cyberinfrastructure is increasingly becoming a target of a wide spectrum of attacks, from Denial of Service to large-scale defacement of an organization's digital presence. Intrusion detection systems (IDSs) provide administrators a defensive edge over intruders lodging such malicious attacks. However, with the sheer number of different IDSs available, one has to objectively assess the capabilities of different IDSs to select an IDS that meets specific organizational requirements. A prerequisite for such an objective assessment is the implicit comparability of IDS literature. In this study, we review IDS literature to understand its implicit comparability from the perspective of the metrics used in the empirical evaluation of IDSs. We identified 22 metrics commonly used in the empirical evaluation of IDSs and constructed search terms to retrieve papers that mention each metric. We manually reviewed a sample of 495 papers and found 159 of them to be relevant. We then estimated the number of relevant papers in the entire set of papers retrieved from IEEE. We found that multiple different metrics are used in the evaluation of IDSs and that the trade-off between metrics is rarely considered. In a retrospective analysis of the IDS literature, we found that the evaluation criteria have been improving over time, albeit marginally. The inconsistencies in the use of evaluation metrics may not enable direct comparison of one IDS to another.
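To make the comparability problem concrete, the sketch below (the counts are invented, not drawn from the review) computes a few of the confusion-matrix metrics that recur in IDS evaluations; two detectors scored on the same trace can each look superior depending on which single metric is reported, which is why ignoring the trade-offs hinders comparison.

```python
# Common confusion-matrix metrics used in IDS evaluations; the two
# hypothetical detectors below illustrate the recall / false-positive-rate
# trade-off that single-metric reporting hides.

def ids_metrics(tp, fp, tn, fn):
    """Return common IDS evaluation metrics from confusion-matrix counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall    = tp / (tp + fn) if (tp + fn) else 0.0   # detection rate
    fpr       = fp / (fp + tn) if (fp + tn) else 0.0   # false positive rate
    accuracy  = (tp + tn) / (tp + fp + tn + fn)
    f1        = (2 * precision * recall / (precision + recall)
                 if (precision + recall) else 0.0)
    return {"precision": precision, "recall": recall,
            "fpr": fpr, "accuracy": accuracy, "f1": f1}

# Two hypothetical IDSs evaluated on the same 10,000-event trace:
print(ids_metrics(tp=95, fp=400, tn=9405, fn=100))  # higher recall, much higher FPR
print(ids_metrics(tp=60, fp=20,  tn=9785, fn=135))  # lower recall, far lower FPR
```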

2018-05-17
J. Kaur, N. R. Chaudhuri.  2016.  Challenges of model reduction in modern power grid with wind generation. 2016 North American Power Symposium (NAPS). :1-6.
2018-05-27
A. Cherukuri, E. Mallada, S. Low, J. Cortes.  2016.  The role of strong convexity-concavity in the convergence and robustness of the saddle-point dynamics. 54th Annual Allerton Conf. on Communication, Control, and Computing (Allerton). :504-510.
2016-09-16
Robert Zager, John Zager.  2016.  Why We Will Continue to Lose the Cyber War. Mad Scientist Conference 2016.

The United States is losing the cyberwar. We are losing the cyberwar because cyber defenses apply the wrong philosophy to the wrong operating environment. In order to be effective, future cyber defenses must be viewed in the context of an engagement between human adversaries.

2017-12-27
Jallouli, O., Abutaha, M., Assad, S. E., Chetto, M., Queudet, A., Deforges, O..  2016.  Comparative study of two pseudo chaotic number generators for securing the IoT. 2016 International Conference on Advances in Computing, Communications and Informatics (ICACCI). :1340–1344.

The extremely rapid development of the Internet of Things brings growing attention to the information security issue. Realization of cryptographically strong pseudo random number generators (PRNGs) is crucial in securing sensitive data, and they play an important role in cryptography and in network security applications. In this paper, we carry out a comparative study of two pseudo chaotic number generators (PCNGs). The first pseudo chaotic number generator (PCNG1) is based on two nonlinear recursive filters of order one using a skew tent map (STmap) and a piecewise linear chaotic map (PWLCmap) as nonlinear functions. The second pseudo chaotic number generator (PCNG2) consists of four chaotic maps, namely PWLCmaps, an STmap, and a logistic map, coupled by means of a binary diffusion matrix [D]. A comparative analysis of the two PCNGs is carried out in terms of computation time (generation time, bit rate, and number of cycles needed to generate one byte) and security.
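As a rough illustration of the building blocks named in the abstract, the sketch below iterates a skew tent map and a piecewise linear chaotic map and thresholds their orbits into bits. The paper's PCNGs additionally use recursive filters and a binary diffusion matrix, which are not reproduced here; the seeds, control parameters, and bit extraction are purely illustrative.

```python
# Skew tent map and piecewise linear chaotic map (PWLCM) iterated into a
# bit stream. Illustrative only: not the paper's PCNG1/PCNG2 constructions.

def skew_tent(x, p):
    """Skew tent map on [0, 1] with control parameter p in (0, 1)."""
    return x / p if x < p else (1.0 - x) / (1.0 - p)

def pwlcm(x, p):
    """Piecewise linear chaotic map on [0, 1] with p in (0, 0.5)."""
    if x < p:
        return x / p
    if x < 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)          # the map is symmetric about x = 0.5

def chaotic_bits(n_bits, x0=0.3141, p=0.4, step=skew_tent):
    """Iterate a chaotic map and threshold the orbit into a bit stream."""
    x, bits = x0, []
    for _ in range(n_bits):
        x = step(x, p)
        bits.append(1 if x >= 0.5 else 0)
    return bits

print(chaotic_bits(32))                       # skew-tent-driven bits
print(chaotic_bits(32, step=pwlcm, p=0.27))   # PWLCM-driven bits
```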

2018-01-10
Li, W., Ji, J., Zhang, G., Zhang, W..  2016.  Cross-layer security based on optical CDMA and algorithmic cryptography. 2016 IEEE Optoelectronics Global Conference (OGC). :1–2.

In this paper, we introduce an optical network with cross-layer security, which can enhance security performance. In the transmitter, the user's data is first encrypted; then, physical-layer encryption is implemented based on optical encoding. In the receiver, after the corresponding optical decoding process, a decryption algorithm is used to restore the user's data. The security performance is evaluated quantitatively.

2017-10-27
Aron Laszka, Mingyi Zhao, Jens Grossklags.  2016.  Banishing Misaligned Incentives for Validating Reports in Bug-Bounty Platforms. 21st European Symposium on Research in Computer Security (ESORICS).
Bug-bounty programs have the potential to harvest the efforts and diverse knowledge of thousands of white hat hackers. As a consequence, they are becoming increasingly popular as a key part of the security culture of organizations. However, bug-bounty programs can be riddled with myriads of invalid vulnerability-report submissions, which are partially the result of misaligned incentives between white hats and organizations. To further improve the effectiveness of bug-bounty programs, we introduce a theoretical model for evaluating approaches for reducing the number of invalid reports. We develop an economic framework and investigate the strengths and weaknesses of existing canonical approaches for effectively incentivizing higher validation efforts by white hats. Finally, we introduce a novel approach, which may improve efficiency by enabling different white hats to exert validation effort at their individually optimal levels.
2017-05-19
Estes, Tanya, Finocchiaro, James, Blair, Jean, Robison, Johnathan, Dalme, Justin, Emana, Michael, Jenkins, Luke, Sobiesk, Edward.  2016.  A Capstone Design Project for Teaching Cybersecurity to Non-technical Users. Proceedings of the 17th Annual Conference on Information Technology Education. :142–147.

This paper presents a multi-year undergraduate computing capstone project that holistically contributes to the development of cybersecurity knowledge and skills in non-computing high school and college students. We describe the student-built Vulnerable Web Server application, which is a system that packages instructional materials and pre-built virtual machines to provide lessons on cybersecurity to non-technical students. The Vulnerable Web Server learning materials have been piloted at several high schools and are now integrated into multiple security lessons in an intermediate, general education information technology course at the United States Military Academy. Our paper interweaves a description of the Vulnerable Web Server materials with the senior capstone design process that allowed it to be built by undergraduate information technology and computer science students, resulting in a valuable capstone learning experience. Throughout the paper, a call is made for greater emphasis on educating the non-technical user.

2017-08-02
Wuxia Jin, Ting Liu, Yu Qu, Jianlei Chi, Di Cui, Qinghua Zheng.  2016.  Dynamic cohesion measurement for distributed system.

Software is shifting from large single-server systems running on powerful machines to multi-server systems such as distributed systems. This change introduces a new challenge for software quality measurement, since current analysis methods for single-server software cannot observe and assess the correlation among components on different nodes. In this paper, a new dynamic cohesion approach is proposed for distributed systems. We extend the Calling Network model for distributed systems by differentiating the methods of components deployed on different nodes. Two new cohesion metrics are proposed to describe the correlation at the component level, extending the cohesion metric of single-server software systems. Experiments conducted on a distributed system, the Netflix RSS Reader, show how to trace system functions spanning three nodes, how to abstract dynamic behaviors among nodes using our model, and how to evaluate software cohesion on a distributed system.

2017-08-18
Sion, Laurens, Van Landuyt, Dimitri, Yskout, Koen, Joosen, Wouter.  2016.  Towards Systematically Addressing Security Variability in Software Product Lines. Proceedings of the 20th International Systems and Software Product Line Conference. :342–343.

With the increasingly pervasive role of software in society, security is becoming an important quality concern, emphasizing security by design, but it requires intensive specialization. Security in families of systems is even harder, as diverse variants of security solutions must be considered, with even different security goals per product. Furthermore, security is not a static object but a moving target, adding variability. For this, an approach to systematically address security concerns in software product lines is needed. It should consider security separate from other variability dimensions. The main challenges to realize this are: (i) expressing security and its variability, (ii) selecting the right solution, (iii) properly instantiating a solution, and (iv) verifying and validating it. In this paper, we present our research agenda towards addressing the aforementioned challenges.

2017-10-27
Devendra Shelar, Jairo Giraldo, Saurabh Amin.  2015.  A Distributed Strategy for Electricity Distribution Network Control in the face of DER Compromises. IEEE CDC 2015.
We focus on the question of distributed control of electricity distribution networks in the face of security attacks on Distributed Energy Resources (DERs). Our attack model includes strategic manipulation of DER set-points by an external hacker to induce a sudden compromise of a subset of DERs connected to the network. We approach the distributed control design problem in two stages. In the first stage, we model the attacker-defender interaction as a Stackelberg game. The attacker (leader) disconnects a subset of DERs by sending them wrong set-point signals. The distribution utility's (follower's) response includes Volt-VAR control of non-compromised DERs and load control. The objective of the attacker (resp. defender) is to maximize (resp. minimize) the weighted sum of the total cost due to loss of frequency regulation and the cost due to loss of voltage regulation. In the second stage, we propose a distributed control (defender response) strategy for each local controller such that, if a sudden supply-demand mismatch is detected (for example, due to DER compromises), the local controllers automatically respond based on their respective observations of local fluctuations in voltage and frequency. This strategy aims to achieve diversification of DER functions in the sense that each uncompromised DER node contributes either to voltage regulation (by contributing reactive power) or to frequency regulation (by contributing active power). We illustrate the effectiveness of this control strategy on a benchmark network.
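A heavily simplified sketch of the second-stage idea follows, under assumed gains, references, and per-unit quantities: each uncompromised DER reacts only to its local voltage and frequency measurements and is specialized to either voltage support (reactive power) or frequency support (active power). The paper derives the actual responses from its Stackelberg formulation, which is not reproduced here.

```python
# Droop-style local DER response specialized by role; gains, references,
# and capabilities are illustrative assumptions, not the paper's design.

def local_der_response(role, v_pu, f_hz, capability,
                       v_ref=1.0, f_ref=60.0, kq=2.0, kp=0.5):
    """Return the (clipped) power adjustment of one DER.

    role: "volt" -> contribute reactive power against voltage deviation;
          "freq" -> contribute active power against frequency deviation.
    """
    if role == "volt":
        return max(-capability, min(capability, kq * (v_ref - v_pu)))
    if role == "freq":
        return max(0.0, min(capability, kp * (f_ref - f_hz)))
    raise ValueError("role must be 'volt' or 'freq'")

# A voltage-support DER and a frequency-support DER reacting to the same
# local measurements after a sudden supply-demand mismatch; both responses
# hit their capability limits here (0.05 and 0.02 per unit).
print(local_der_response("volt", v_pu=0.96, f_hz=59.9, capability=0.05))
print(local_der_response("freq", v_pu=0.96, f_hz=59.9, capability=0.02))
```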
2017-03-07
Johnson, R., Kiourtis, N., Stavrou, A., Sritapan, V..  2015.  Analysis of content copyright infringement in mobile application markets. 2015 APWG Symposium on Electronic Crime Research (eCrime). :1–10.

As mobile devices become larger in display size and more reliable at delivering paid entertainment and video content, we also see a rise in mobile applications that attempt to profit by streaming pirated content to unsuspecting end-users. These applications are both paid and free, and in the case of free applications, the source of funding appears to be advertisements displayed while the content is streamed to the device. In this paper, we assess the extent of content copyright infringement for mobile markets that span multiple platforms (iOS, Android, and Windows Mobile) and cover both official and unofficial mobile markets located across the world. Using a set of search keywords that point to titles of paid streaming content, we discovered 8,592 Android, 5,550 iOS, and 3,910 Windows mobile applications that matched our search criteria. Out of those applications, hundreds had links to either locally or remotely stored pirated content and were not developed, endorsed, or, in many cases, known to the owners of the copyrighted content. We also revealed the network locations of 856,717 Uniform Resource Locators (URLs) pointing to back-end servers and cyber-lockers used to communicate the pirated content to the mobile application.

2017-03-08
Jianqiang, Gu, Shue, Mei, Weijun, Zhong.  2015.  Analyzing information security investment in networked supply chains. 2015 International Conference on Logistics, Informatics and Service Sciences (LISS). :1–5.

Security breaches and attacks are becoming more critical and, at the same time, more challenging problems for many firms in networked supply chains. A game theory-based model is developed to investigate how the interdependent nature of information security risk influences firms' optimal information security investment strategies. The equilibrium levels of information security investment under the non-cooperative game condition are compared with socially optimal solutions. The results show that infectious risks often induce firms to invest inefficiently, whereas trust risks lead them to overinvest in information security. We also find that a firm's investment may not change monotonically with infectious risks and trust risks in the centralized case. Furthermore, relative to the socially efficient level, firms facing infectious risks may invest excessively depending on whether the trust risks are large enough.

2016-04-11
Brad Miller, Alex Kantchelian, Michael Carl Tschantz, Sadia Afroz, Rekha Bachwani, Riyaz Faizullabhoy, Ling Huang, Vaishaal Shankar, Tony Wu, George Yiu et al..  2015.  Back to the Future: Malware Detection with Temporally Consistent Labels. CoRR. abs/1510.07338

The malware detection arms race involves constant change: malware changes to evade detection and labels change as detection mechanisms react. Recognizing that malware changes over time, prior work has enforced temporally consistent samples by requiring that training binaries predate evaluation binaries. We present temporally consistent labels, requiring that training labels also predate evaluation binaries since training labels collected after evaluation binaries constitute label knowledge from the future. Using a dataset containing 1.1 million binaries from over 2.5 years, we show that enforcing temporal label consistency decreases detection from 91% to 72% at a 0.5% false positive rate compared to temporal samples alone.

The impact of temporal labeling demonstrates the potential of improved labels to improve detection results. Hence, we present a detector capable of selecting binaries for submission to an expert labeler for review. At a 0.5% false positive rate, our detector achieves a 72% true positive rate without an expert, which increases to 77% and 89% with 10 and 80 expert queries daily, respectively. Additionally, we detect 42% of malicious binaries initially undetected by all 32 antivirus vendors from VirusTotal used in our evaluation. For evaluation at scale, we simulate the human expert labeler and show that our approach is robust against expert labeling errors. Our novel contributions include a scalable malware detector integrating manual review with machine learning and the examination of temporal label consistency.
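The temporal-consistency constraint reduces to a simple data-split rule, sketched below; the record layout, field names, and dates are assumptions for illustration, not the authors' pipeline.

```python
# Training binaries *and* their labels must predate the evaluation period,
# so no label knowledge from the future leaks into training.

from datetime import date
from typing import List, NamedTuple

class Sample(NamedTuple):
    sha256: str
    first_seen: date    # when the binary was first observed
    label_date: date    # when its malicious/benign label was collected
    label: int          # 1 = malicious, 0 = benign

def temporally_consistent_split(samples: List[Sample], eval_start: date):
    """Split samples so training binaries and labels both predate eval_start."""
    train = [s for s in samples
             if s.first_seen < eval_start and s.label_date < eval_start]
    evaluation = [s for s in samples if s.first_seen >= eval_start]
    return train, evaluation

corpus = [
    Sample("a" * 64, date(2013, 1, 5), date(2013, 2, 1), 1),
    Sample("b" * 64, date(2013, 3, 9), date(2014, 6, 1), 0),  # label arrives late
    Sample("c" * 64, date(2014, 1, 2), date(2014, 1, 20), 1),
]
train, evaluation = temporally_consistent_split(corpus, date(2014, 1, 1))
print(len(train), len(evaluation))   # the late-labeled binary is excluded from training
```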

2015-11-11
Kantchelian, Alex, Tschantz, Michael Carl, Afroz, Sadia, Miller, Brad, Shankar, Vaishaal, Bachwani, Rekha, Joseph, Anthony D., Tygar, J. D..  2015.  Better Malware Ground Truth: Techniques for Weighting Anti-Virus Vendor Labels. Proceedings of the 8th ACM Workshop on Artificial Intelligence and Security. :45–56.

We examine the problem of aggregating the results of multiple anti-virus (AV) vendors' detectors into a single authoritative ground-truth label for every binary. To do so, we adapt a well-known generative Bayesian model that postulates the existence of a hidden ground truth upon which the AV labels depend. We use training based on Expectation Maximization for this fully unsupervised technique. We evaluate our method using 279,327 distinct binaries from VirusTotal, each of which appeared for the first time between January 2012 and June 2014.

Our evaluation shows that our statistical model is consistently more accurate at predicting the future-derived ground truth than all unweighted rules of the form "k out of n" AV detections. In addition, we evaluate the scenario where partial ground truth is available for model building. We train a logistic regression predictor on the partial label information. Our results show that as few as 100 randomly selected training instances with ground truth are enough to achieve an 80% true positive rate at a 0.1% false positive rate. In comparison, the best unweighted threshold rule provides only a 60% true positive rate at the same false positive rate.
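The flavor of the unsupervised aggregation can be conveyed with a textbook two-coin (Dawid-Skene-style) formulation fit by Expectation Maximization: a hidden malicious/benign label per binary and a sensitivity/specificity pair per vendor. The sketch below is generic, with an invented toy label matrix; it is not the authors' exact model or weighting scheme.

```python
# EM for a hidden-ground-truth label model over anti-virus vendor verdicts.

def _clamp(p, eps=1e-6):
    """Keep probabilities strictly inside (0, 1) for numerical safety."""
    return min(max(p, eps), 1.0 - eps)

def em_av_labels(labels, n_iter=50):
    """labels[i][j] = 1 if vendor j flags binary i as malicious, else 0."""
    n, m = len(labels), len(labels[0])
    pi = 0.5                      # prior probability of being malicious
    sens = [0.7] * m              # P(vendor flags | malicious)
    spec = [0.7] * m              # P(vendor does not flag | benign)
    gamma = [0.5] * n             # posterior P(malicious) per binary
    for _ in range(n_iter):
        # E-step: posterior that each binary is malicious
        for i, row in enumerate(labels):
            p_mal, p_ben = pi, 1.0 - pi
            for j, l in enumerate(row):
                p_mal *= sens[j] if l else (1.0 - sens[j])
                p_ben *= (1.0 - spec[j]) if l else spec[j]
            gamma[i] = p_mal / (p_mal + p_ben)
        # M-step: re-estimate the prior and per-vendor reliabilities
        mal, ben = sum(gamma), n - sum(gamma)
        pi = _clamp(mal / n)
        for j in range(m):
            sens[j] = _clamp(sum(g * labels[i][j]
                                 for i, g in enumerate(gamma)) / mal)
            spec[j] = _clamp(sum((1 - g) * (1 - labels[i][j])
                                 for i, g in enumerate(gamma)) / ben)
    return gamma, sens, spec

# Toy label matrix: vendor 0 is noisy, vendors 1 and 2 mostly agree.
toy = [[1, 1, 1], [0, 1, 1], [1, 0, 0], [0, 0, 0], [1, 1, 1], [0, 0, 1]]
gamma, sens, spec = em_av_labels(toy)
print([round(g, 2) for g in gamma])   # posterior maliciousness per binary
```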

2018-05-16
E. Nozari, P. Tallapragada, J. Cortes.  2015.  Differentially private average consensus with optimal noise selection. IFAC-PapersOnLine. 48:203-208.

This paper studies the problem of privacy-preserving average consensus in multi-agent systems. The network objective is to compute the average of the initial agent states while keeping these values differentially private against an adversary that has access to all inter-agent messages. We establish an impossibility result that shows that exact average consensus cannot be achieved by any algorithm that preserves differential privacy. This result motivates our design of a differentially private discrete-time distributed algorithm that corrupts messages with Laplacian noise and is guaranteed to achieve average consensus in expectation. We examine how to optimally select the noise parameters in order to minimize the variance of the network convergence point for a desired level of privacy.

IFAC Workshop on Distributed Estimation and Control in Networked Systems, Philadelphia, PA
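The message-corruption mechanism can be sketched as a standard averaging iteration whose transmitted values carry Laplace noise of geometrically decaying scale. The step size, noise schedule, and graph below are illustrative assumptions; the paper's optimal noise-parameter selection and its formal privacy and convergence guarantees are not reproduced.

```python
# Differentially-private-style average consensus: agents broadcast noisy
# states and take a consensus step; the noise scale decays over iterations.

import random

def laplace(scale):
    """One Laplace(0, scale) sample (difference of two exponentials)."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def noisy_average_consensus(x0, neighbors, steps=200, gamma=0.2, c=1.0, q=0.95):
    """Each agent sends its state plus Laplace noise of scale c * q**k,
    then moves toward the messages received from its neighbors."""
    x = list(x0)
    for k in range(steps):
        scale = c * (q ** k)
        sent = [xi + laplace(scale) for xi in x]          # privatized messages
        x = [x[i] + gamma * sum(sent[j] - sent[i] for j in neighbors[i])
             for i in range(len(x))]
    return x

# Four agents on a line graph; the true average of the initial states is 2.5.
states = noisy_average_consensus([1.0, 2.0, 3.0, 4.0],
                                 neighbors={0: [1], 1: [0, 2], 2: [1, 3], 3: [2]})
print([round(v, 3) for v in states])   # near 2.5; exact only in expectation
```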