Biblio
Recently, PIN-TRUST, a method to predict future trust relationships between users, was proposed. PIN-TRUST outperforms existing trust prediction methods by exploiting all types of interactions between users and the reciprocation of those interactions. In this paper, we validate whether its consideration of the reciprocation of interactions is really effective in trust prediction. Furthermore, we consider a new concept, the "uncertainty" of untrustworthy users, which is devised to reflect the difficulty of modeling the activities of untrustworthy users in PIN-TRUST. Then, we also validate the effectiveness of this uncertainty concept. Through the validation, we reveal that considering the reciprocation of interactions is effective for trust prediction with PIN-TRUST, and that it is necessary to treat the uncertainty of untrustworthy users the same as that of other users.
While existing proactive defense paradigms such as address mutation are effective in slowing down reconnaissance by naive attackers, they are ineffective against skilled human attackers. In this paper, we analytically show that the goal of defeating reconnaissance by skilled human attackers is only achievable by an integration of five defensive dimensions: (1) mutating host addresses, (2) mutating host fingerprints, (3) anonymizing host fingerprints, (4) deploying high-fidelity honeypots with context-aware fingerprints, and (5) deploying context-aware content on those honeypots. Using a novel class of honeypots, referred to as proxy honeypots (high-interaction honeypots with customizable fingerprints), we propose a proactive defense model, called HIDE, that constantly mutates the addresses and fingerprints of network hosts and proxy honeypots in a manner that maximally anonymizes the identity of network hosts. The objective is to make a host untraceable over time by preventing even skilled attackers from reusing attributes of a host discovered in previous scanning, including its addresses and fingerprint, to identify that host again. The mutations are generated through a formal definition and modeling of the problem. Using a red-team evaluation with a group of white-hat hackers, we evaluated our five-dimensional defense model and compared its effectiveness with alternative and competing scenarios. These experiments, as well as our analytical evaluation, show that by anonymizing all identifying attributes of a host/honeypot over time, HIDE is able to significantly complicate reconnaissance, even for highly skilled human attackers.
Interpersonal trust is one of the key components of efficient teamwork. Research suggests two main approaches for trust formation: personal information exchange (e.g., social icebreakers), and creating a context of risk and interdependence (e.g., trust falls). However, because these strategies are difficult to implement in an online setting, trust is more difficult to achieve and preserve in distributed teams. In this paper, we argue that games are an optimal environment for trust formation because they can simulate both risk and interdependence. Results of our online experiment show that a social game can be more effective than a social task at fostering interpersonal trust. Furthermore, trust formation through the game is reliable, whereas trust depends on several contingencies in the social task. Our work suggests that gameplay interactions do not merely promote impoverished versions of the rich ties formed through conversation, but rather engender genuine social bonds.
Process-based isolation, suggested by several research prototypes, is a cornerstone of modern browser security architectures. Google Chrome is the first commercial browser that adopts this architecture. Unlike several research prototypes, Chrome's process-based design does not isolate different web origins, but primarily promises to protect "the local system" from "the web". However, as billions of users now use web-based cloud services (e.g., Dropbox and Google Drive), which are integrated into the local system, the premise that browsers can effectively isolate the web from the local system has become questionable. In this paper, we argue that, if the process-based isolation disregards the same-origin policy as one of its goals, then its promise of maintaining the "web/local system (local)" separation is doubtful. Specifically, we show that existing memory vulnerabilities in Chrome's renderer can be used as a stepping-stone to drop executables/scripts in the local file system, install unwanted applications, and misuse system sensors. These attacks are purely data-oriented and do not alter any control flow or import foreign code. Thus, such attacks bypass binary-level protection mechanisms, including ASLR and in-memory partitioning. Finally, we discuss several candidate defenses and present a possible way to mitigate the presented attacks.
Cyberinfrastructure is increasingly becoming the target of a wide spectrum of attacks, from denial of service to large-scale defacement of the digital presence of an organization. Intrusion Detection Systems (IDSs) provide administrators a defensive edge over intruders lodging such malicious attacks. However, with the sheer number of different IDSs available, one has to objectively assess the capabilities of different IDSs to select an IDS that meets specific organizational requirements. A prerequisite to enable such an objective assessment is the implicit comparability of IDS literature. In this study, we review IDS literature to understand its implicit comparability from the perspective of the metrics used in the empirical evaluation of IDSs. We identified 22 metrics commonly used in the empirical evaluation of IDSs and constructed search terms to retrieve papers that mention each metric. We manually reviewed a sample of 495 papers and found 159 of them to be relevant. We then estimated the number of relevant papers in the entire set of papers retrieved from IEEE. We found that, in the evaluation of IDSs, multiple different metrics are used and the trade-off between metrics is rarely considered. In a retrospective analysis of the IDS literature, we found that the evaluation criteria have been improving over time, albeit marginally. The inconsistencies in the use of evaluation metrics may not enable direct comparison of one IDS to another.
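As background for the metrics discussion above (not drawn from the surveyed papers), a minimal sketch of how a few of the most common IDS evaluation metrics relate to one another; the confusion-matrix counts are made up purely to illustrate the detection/false-alarm trade-off the survey mentions.

# Illustrative only: common IDS evaluation metrics computed from a confusion
# matrix. The counts below are invented and do not come from the surveyed papers.

def ids_metrics(tp, fp, tn, fn):
    tpr = tp / (tp + fn)                  # true positive rate (detection rate / recall)
    fpr = fp / (fp + tn)                  # false positive rate (false alarm rate)
    precision = tp / (tp + fp)            # fraction of alerts that are real attacks
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * tpr / (precision + tpr)
    return {"TPR": tpr, "FPR": fpr, "precision": precision,
            "accuracy": accuracy, "F1": f1}

# Two hypothetical IDSs: one maximizes detection at the cost of false alarms,
# the other the reverse, showing why reporting a single metric is not enough.
print(ids_metrics(tp=95, fp=400, tn=9600, fn=5))
print(ids_metrics(tp=70, fp=20, tn=9980, fn=30))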
The United States is losing the cyberwar. We are losing the cyberwar because cyber defenses apply the wrong philosophy to the wrong operating environment. In order to be effective, future cyber defenses must be viewed in the context of an engagement between human adversaries.
The extremely rapid development of the Internet of Things brings growing attention to the information security issue. The realization of cryptographically strong pseudo-random number generators (PRNGs) is crucial in securing sensitive data; they play an important role in cryptography and in network security applications. In this paper, we carry out a comparative study of two pseudo-chaotic number generators (PCNGs). The first pseudo-chaotic number generator (PCNG1) is based on two nonlinear recursive filters of order one using a skew tent map (STmap) and a piecewise linear chaotic map (PWLCmap) as nonlinear functions. The second pseudo-chaotic number generator (PCNG2) consists of four chaotic maps (PWLCmaps, an STmap, and a logistic map) coupled by means of a binary diffusion matrix [D]. A comparative analysis of the two PCNGs is carried out in terms of computation time (generation time, bit rate, and number of cycles needed to generate one byte) and security.
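Neither construction is specified in detail above, so the following is only an illustrative sketch of the skew tent map building block that both PCNGs use; the breakpoint parameter, the 32-bit discretization, and the byte extraction are assumptions, not the paper's PCNG1/PCNG2 designs.

# Illustrative sketch only: a pseudo-random byte stream driven by the skew tent
# map named in the abstract. Parameter p, the 32-bit scaling, and the byte
# extraction are assumed here, not taken from the paper.

def skew_tent(x, p):
    """One iteration of the skew tent map on (0, 1) with breakpoint p."""
    return x / p if x < p else (1.0 - x) / (1.0 - p)

def chaotic_bytes(n, seed=0.123456789, p=0.3):
    x = seed
    out = bytearray()
    for _ in range(n):
        x = skew_tent(x, p)
        out.append(int(x * 2**32) & 0xFF)   # keep the low byte of a 32-bit sample
    return bytes(out)

print(chaotic_bytes(16).hex())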
In this paper, we introduce an optical network with cross-layer security, which can enhance security performance. In the transmitter, the user's data is first encrypted; after that, physical-layer encryption is implemented based on optical encoding. In the receiver, after the corresponding optical decoding process, a decryption algorithm is used to restore the user's data. The security performance is evaluated quantitatively.
This paper presents a multi-year undergraduate computing capstone project that holistically contributes to the development of cybersecurity knowledge and skills in non-computing high school and college students. We describe the student-built Vulnerable Web Server application, which is a system that packages instructional materials and pre-built virtual machines to provide lessons on cybersecurity to non-technical students. The Vulnerable Web Server learning materials have been piloted at several high schools and are now integrated into multiple security lessons in an intermediate, general education information technology course at the United States Military Academy. Our paper interweaves a description of the Vulnerable Web Server materials with the senior capstone design process that allowed it to be built by undergraduate information technology and computer science students, resulting in a valuable capstone learning experience. Throughout the paper, a call is made for greater emphasis on educating the non-technical user.
Instead of single-server software systems developed for powerful computers, software is shifting from large single-server systems to multi-server systems such as distributed systems. This change introduces a new challenge for software quality measurement, since current software analysis methods for single-server software cannot observe and assess the correlation among components on different nodes. In this paper, a new dynamic cohesion approach is proposed for distributed systems. We extend the Calling Network model for distributed systems by differentiating the methods of components deployed on different nodes. Two new cohesion metrics are proposed to describe the correlation at the component level, extending the cohesion metric of single-server software systems. The experiments, conducted on a distributed system (the Netflix RSS Reader), show how to trace the various system functions accomplished on three nodes, how to abstract dynamic behaviors among different nodes using our model, and how to evaluate software cohesion in a distributed system.
With the increasingly pervasive role of software in society, security is becoming an important quality concern, emphasizing security by design, but it requires intensive specialization. Security in families of systems is even harder, as diverse variants of security solutions must be considered, even with different security goals per product. Furthermore, security is not a static object but a moving target, adding variability. For this, an approach to systematically address security concerns in software product lines is needed. It should consider security separately from other variability dimensions. The main challenges to realize this are: (i) expressing security and its variability, (ii) selecting the right solution, (iii) properly instantiating a solution, and (iv) verifying and validating it. In this paper, we present our research agenda towards addressing the aforementioned challenges.
As mobile devices increasingly become bigger in terms of display and more reliable in delivering paid entertainment and video content, we also see a rise in the presence of mobile applications that attempt to profit by streaming pirated content to unsuspecting end-users. These applications are both paid and free; in the case of free applications, the source of funding appears to be the advertisements displayed while the content is streamed to the device. In this paper, we assess the extent of content copyright infringement for mobile markets that span multiple platforms (iOS, Android, and Windows Mobile) and cover both official and unofficial mobile markets located across the world. Using a set of search keywords that point to titles of paid streaming content, we discovered 8,592 Android, 5,550 iOS, and 3,910 Windows mobile applications that matched our search criteria. Out of those applications, hundreds had links to either locally or remotely stored pirated content and were not developed, endorsed, or, in many cases, known to the owners of the copyrighted content. We also revealed the network locations of 856,717 Uniform Resource Locators (URLs) pointing to back-end servers and cyber-lockers used to deliver the pirated content to the mobile applications.
Security breaches and attacks are becoming a more critical and, simultaneously, a more challenging problem for many firms in networked supply chains. A game-theoretic model is developed to investigate how the interdependent nature of information security risk influences firms' optimal strategies for investing in information security. The equilibrium levels of information security investment under the non-cooperative game are compared with the socially optimal solutions. The results show that infectious risks often induce firms to invest inefficiently, whereas trust risks lead them to overinvest in information security. We also find that a firm's investment does not necessarily change monotonically with infectious risks and trust risks in the centralized case. Furthermore, relative to the socially efficient level, firms facing infectious risks may invest excessively depending on whether trust risks are large enough.
The malware detection arms race involves constant change: malware changes to evade detection and labels change as detection mechanisms react. Recognizing that malware changes over time, prior work has enforced temporally consistent samples by requiring that training binaries predate evaluation binaries. We present temporally consistent labels, requiring that training labels also predate evaluation binaries since training labels collected after evaluation binaries constitute label knowledge from the future. Using a dataset containing 1.1 million binaries from over 2.5 years, we show that enforcing temporal label consistency decreases detection from 91% to 72% at a 0.5% false positive rate compared to temporal samples alone.
The impact of temporal labeling demonstrates the potential of improved labels to increase detection performance. Hence, we present a detector capable of selecting binaries for submission to an expert labeler for review. At a 0.5% false positive rate, our detector achieves a 72% true positive rate without an expert, which increases to 77% and 89% with 10 and 80 expert queries daily, respectively. Additionally, we detect 42% of malicious binaries initially undetected by all 32 antivirus vendors from VirusTotal used in our evaluation. For evaluation at scale, we simulate the human expert labeler and show that our approach is robust against expert labeling errors. Our novel contributions include a scalable malware detector integrating manual review with machine learning and the examination of temporal label consistency.
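The temporal constraints described above are simple to state in code: both a training binary and the label attached to it must predate the evaluation period. A minimal sketch, with assumed field names (first_seen, label_date) rather than the paper's actual dataset schema:

# Minimal sketch of temporally consistent samples and labels. The record
# fields first_seen and label_date are assumed names for illustration only.

def temporally_consistent_training(binaries, eval_start):
    """Keep only binaries usable to train a detector evaluated from eval_start onward."""
    train = []
    for b in binaries:
        sample_ok = b["first_seen"] < eval_start    # temporally consistent sample
        label_ok = b["label_date"] < eval_start     # temporally consistent label
        if sample_ok and label_ok:                  # both constraints must hold
            train.append(b)
    return train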
We examine the problem of aggregating the results of multiple anti-virus (AV) vendors' detectors into a single authoritative ground-truth label for every binary. To do so, we adapt a well-known generative Bayesian model that postulates the existence of a hidden ground truth upon which the AV labels depend. We use training based on Expectation Maximization for this fully unsupervised technique. We evaluate our method using 279,327 distinct binaries from VirusTotal, each of which appeared for the first time between January 2012 and June 2014.
Our evaluation shows that our statistical model is consistently more accurate at predicting the future-derived ground truth than all unweighted rules of the form "k out of n" AV detections. In addition, we evaluate the scenario where partial ground truth is available for model building. We train a logistic regression predictor on the partial label information. Our results show that as few as 100 randomly selected training instances with ground truth are enough to achieve an 80% true positive rate at a 0.1% false positive rate. In comparison, the best unweighted threshold rule provides only a 60% true positive rate at the same false positive rate.
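For reference, the unweighted "k out of n" baseline the abstract compares against is just a threshold on the number of AV engines flagging a binary; the sketch below shows only that baseline, not the paper's Bayesian/EM aggregation model, and the vendor names are made up.

# The unweighted "k out of n" baseline: a binary is labeled malicious if at
# least k of the n AV engines flag it. This is the comparison baseline, not
# the paper's generative Bayesian model.

def k_out_of_n_label(av_detections, k):
    """av_detections: dict mapping AV vendor name -> bool (flagged or not)."""
    return sum(av_detections.values()) >= k

scan = {"VendorA": True, "VendorB": False, "VendorC": True, "VendorD": True}
print(k_out_of_n_label(scan, k=2))   # True: 3 of 4 engines flagged the binary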
This paper studies the problem of privacy-preserving average consensus in multi-agent systems. The network objective is to compute the average of the initial agent states while keeping these values differentially private against an adversary that has access to all inter-agent messages. We establish an impossibility result that shows that exact average consensus cannot be achieved by any algorithm that preserves differential privacy. This result motivates our design of a differentially private discrete-time distributed algorithm that corrupts messages with Laplacian noise and is guaranteed to achieve average consensus in expectation. We examine how to optimally select the noise parameters in order to minimize the variance of the network convergence point for a desired level of privacy.
IFAC Workshop on Distributed Estimation and Control in Networked Systems, Philadelphia, PA.
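A minimal sketch of the general mechanism discussed above: agents broadcast Laplace-noise-corrupted states and average what they receive. The graph, step size, and decaying noise schedule are placeholders; choosing the noise parameters optimally (and the impossibility of exact consensus under differential privacy) is the paper's contribution and is not reproduced here.

# Illustrative sketch of Laplace-noise-corrupted average consensus on a small
# undirected graph. Step size, noise scale, decay rate, and the graph itself
# are placeholders, not the paper's algorithm or its optimized parameters.

import numpy as np

def dp_consensus(x0, adjacency, steps=200, step_size=0.2, noise_scale=0.5, decay=0.9):
    """Average consensus where every transmitted state is corrupted with Laplace noise."""
    x = np.array(x0, dtype=float)
    n = len(x)
    for k in range(steps):
        scale = noise_scale * decay**k                        # decaying noise so the run settles
        msgs = x + np.random.laplace(0.0, scale, size=n)      # each agent broadcasts a noisy state
        new_x = x.copy()
        for i in range(n):
            for j in range(n):
                if adjacency[i][j]:
                    new_x[i] += step_size * (msgs[j] - x[i])  # move toward noisy neighbor states
        x = new_x
    return x

# Four agents on a ring; the true average of the initial states is 2.5, and the
# run ends near it with some residual variance due to the injected noise.
ring = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
print(dp_consensus([1.0, 2.0, 3.0, 4.0], ring))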