Biblio
For authenticating time-critical broadcast messages, the IEEE 1609.2 security standard for Vehicular Ad hoc Networks (VANETs) suggests the use of the Elliptic Curve Digital Signature Algorithm (ECDSA). Since ECDSA verification is expensive in terms of time, the most commonly suggested alternatives are TESLA and signature amortization. Unfortunately, these algorithms lack immediate authentication and non-repudiation. We therefore introduce a probabilistic verification scheme for an ECDSA-based authentication protocol. Using ns2 simulations, we compare the performance of all the above-mentioned broadcast authentication algorithms. The results show that our proposed scheme achieves a higher packet-processed ratio than all the other algorithms.
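As a rough sketch of the probabilistic-verification idea (illustrative only, not the paper's protocol: the sampling rate p, the beacon format, and the use of the Python cryptography package are all assumptions), a receiver might fully verify only a random fraction of incoming beacons:

```python
# Probabilistic verification sketch: verify only a fraction p of beacons
# with full ECDSA, trading per-message assurance for processing time.
import random
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

sk = ec.generate_private_key(ec.SECP256R1())   # sender's signing key
pk = sk.public_key()                           # known to receivers

def verify_probabilistically(msg, sig, p=0.3):
    if random.random() > p:
        return True        # accepted without verification this time
    try:
        pk.verify(sig, msg, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

beacon = b"pos=(12.3,45.6) speed=13.9 t=1687431002"
sig = sk.sign(beacon, ec.ECDSA(hashes.SHA256()))
print(verify_probabilistically(beacon, sig))
```

Lowering p cuts the average verification cost per packet, which is what raises the packet-processed ratio; the price is a chance that a forged beacon is only caught on a later sample.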
Digital signatures are perhaps the most important basis for authentication and trust relationships in large-scale systems. More specifically, various applications of signatures provide privacy- and anonymity-preserving mechanisms and protocols, and these, in turn, are becoming critical (due to the recently recognized need to protect individuals according to national rules and regulations). A specific type of signature called "signatures with efficient protocols", as introduced by Camenisch and Lysyanskaya (CL), efficiently accommodates various basic protocols and extensions such as zero-knowledge proofs, signing committed messages, and re-randomizability. These are, in fact, typical operations associated with signatures used in anonymity and privacy-preserving scenarios. To date, there are no "signatures with efficient protocols" that are both based on simple assumptions and truly practical. These two properties assure a robust primitive: first, simple assumptions are needed to ensure that this basic primitive is mathematically robust and does not require special ad hoc assumptions that are riskier, imply less efficiency, are more tuned to the protocol itself, and are perhaps less trusted. In the other dimension, efficiency is a must given the anonymity applications of the protocol, since without a proper level of efficiency the future adoption of the primitive is always questionable (in spite of its need). In this work, we present a new CL-type signature scheme that is re-randomizable under a simple, well-studied, and by now standard assumption (SXDH). The signature is efficient (built on recent QA-NIZK constructions) and is, by design, suitable for extended contexts that typify privacy settings (such as anonymous credentials, group signatures, and offline e-cash). We demonstrate its power by presenting practical protocols based on it.
We analyze Chang, Chen, and Wang's (3,n) method for sharing a secret grayscale image among n grayscale cover images with participant authentication and damaged-pixel repairing (SSAR); it restores the secret image from any three of the cover images used. We show that SSAR may fail, is unable to recognize a fake participant, and has a repairing ability limited to 62.5%. We propose a (4,n) enhancement of SSAR, SSAR-E, allowing 100% exact restoration of a corrupted pixel using any four of the n covers, and recognizing a fake participant with the help of cryptographic hash functions with 5-bit values, which allows better error detection than 4-bit values. Using a special permutation with only one cycle covering all the secret-image pixels, SSAR-E is able to restore all damaged pixels of the secret image even when just one correct pixel is left. Unlike SSAR, SSAR-E allows the secret image to be restored by authorized parties only. The performance and cover-image sizes for SSAR-E are the same as for SSAR.
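The single-cycle permutation over all pixels that SSAR-E relies on can be produced, for example, with Sattolo's algorithm; the following minimal sketch (an illustration, not necessarily the paper's construction) generates one and checks the single-cycle property:

```python
import random

def sattolo_cycle(n):
    """Uniformly random permutation of range(n) that consists of exactly
    one cycle visiting all n elements (Sattolo's algorithm)."""
    p = list(range(n))
    for i in range(n - 1, 0, -1):
        j = random.randrange(i)   # j < i: this restriction forces one cycle
        p[i], p[j] = p[j], p[i]
    return p

perm = sattolo_cycle(16)          # e.g. a permutation of 16 pixel indices
seen, k = set(), 0                # walking the cycle from any start
while k not in seen:              # visits every pixel exactly once
    seen.add(k)
    k = perm[k]
assert len(seen) == 16
```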
In cloud computing, computationally weak users are often willing to outsource costly computations to a cloud, while at the same time they need to check the correctness of the results provided by the cloud. Such activities motivate the notion of verifiable computation (VC). Recently, Parno, Raykova, and Vaikuntanathan showed that a VC protocol can be constructed from an attribute-based encryption (ABE) scheme for the same class of functions. In this paper, we propose two practical and efficient semi-adaptively secure key-policy attribute-based encryption (KP-ABE) schemes with constant-size ciphertexts. Semi-adaptive security requires that the adversary designate the challenge attribute set after it receives the public parameters but before it issues any secret key query, which is stronger than the selective security guarantee. Our first construction deals with a small universe while the second supports a large universe. Both constructions employ the technique underlying the prime-order instantiation of nested dual system groups, which is based on the d-linear assumption, including the SXDH and DLIN assumptions. To evaluate the performance, we implement our ABE schemes in Python using Charm. Compared with previous KP-ABE schemes with constant-size ciphertexts, our constructions achieve shorter ciphertext and secret key sizes and require lower computation costs, especially under the SXDH assumption.
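For context, SXDH is an assumption about asymmetric bilinear (pairing) groups, which Charm exposes directly; here is a minimal sketch of the bilinearity property such constructions build on (the curve choice MNT224 is an assumption, not necessarily the one used in the paper):

```python
# Bilinear-group sanity check in Charm: e(g1^a, g2^b) = e(g1, g2)^(a*b).
from charm.toolbox.pairinggroup import PairingGroup, ZR, G1, G2, pair

group = PairingGroup('MNT224')              # asymmetric (type-3) pairing
a, b = group.random(ZR), group.random(ZR)
g1, g2 = group.random(G1), group.random(G2)
assert pair(g1 ** a, g2 ** b) == pair(g1, g2) ** (a * b)
```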
Recent years have witnessed a flourishing of hands-on cybersecurity labs and competitions. The information technology (IT) education community has recognized their significant role in boosting students' interest in security and enhancing their security knowledge and skills. Compared to the focus on individual-based education materials, much less attention has been paid to the development of tools and materials suitable for team-based security practices, which, however, prevail in real-world environments. One major bottleneck is the lack of suitable platforms for this type of practice in the IT education community. In this paper, we propose a low-cost, team-oriented cybersecurity practice platform called Platoon. The Platoon platform allows for quickly and automatically creating one or more virtual networks that mimic real-world corporate networks using a regular computer. The virtual environment created by Platoon is suitable for cybersecurity labs, competitions, and projects. The performance data and user feedback collected from our cyber-defense exercises indicate that Platoon is practical and useful for enhancing students' security learning outcomes.
Poster presented at the 2017 Science of Security UIUC Lablet Summer Internship Poster Session held on July 27, 2017 in Urbana, IL.
Recent years have seen exponential growth in the collection and processing of data from heterogeneous sources for a variety of purposes. Several methods and techniques have been proposed to transform and fuse data into "useful" information. However, the security aspects of fusing sensitive data are often overlooked. This paper investigates the problem of data fusion and derived-data control. In particular, we identify the requirements for regulating the fusion process and eliciting restrictions on the access and usage of derived data. Based on these requirements, we propose an attribute-based policy framework to control the fusion of data from different information sources under the control of different authorities. The framework comprises two types of policies: access control policies, which define the authorizations governing the resources used in the fusion process, and fusion policies, which define constraints on allowed fusion processes. We also discuss how such policies can be obtained for derived data.
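A toy sketch of how the two policy types could interact (all dataset names, roles, and rules below are hypothetical illustrations, not the paper's framework):

```python
# Access control policies guard individual sources; fusion policies
# constrain which combinations of sources may be fused together.
ACCESS_POLICIES = {
    "gps_traces":     lambda user: "analyst" in user["roles"],
    "health_records": lambda user: "clinician" in user["roles"],
}

def fusion_allowed(sources):
    # Example fusion policy: never fuse location data with health data.
    return not {"gps_traces", "health_records"} <= set(sources)

def fuse(user, sources):
    for s in sources:
        if not ACCESS_POLICIES[s](user):
            raise PermissionError(f"access denied to {s}")
    if not fusion_allowed(sources):
        raise PermissionError("fusion policy forbids this combination")
    return f"fused({', '.join(sources)})"

print(fuse({"roles": {"analyst"}}, ["gps_traces"]))  # allowed
```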
One way of evaluating the reusability of a test collection is to determine whether removing the unique contributions of some system would alter the preference order between that system and others. Rank correlation measures such as Kendall's tau are often used for this purpose. Rank correlation measures are appropriate for ordinal measures in which only preference order is important, but many evaluation measures produce system scores in which both the preference order and the magnitude of the score difference are important. Such measures are referred to as interval. Pearson's rho offers one way in which correlation can be computed over results from an interval measure such that smaller errors in the gap size are preferred. When seeking to improve over existing systems, we care most about comparisons among the best systems. For that purpose we prefer head-weighted measures such as tau_AP, which is designed for ordinal data. No existing head-weighted measure fully leverages the information present in interval effectiveness measures. This paper introduces such a measure, referred to as Pearson Rank.
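For intuition on the ordinal/interval distinction (Pearson Rank itself is defined in the paper), the sketch below uses hypothetical scores for five systems: the ordering is unchanged, so Kendall's tau is 1.0, while Pearson's rho drops below 1 because the score gaps changed:

```python
# Illustrative only: Kendall's tau sees just the preference order, while
# Pearson's rho also reacts to the size of score gaps.
from scipy.stats import kendalltau, pearsonr

full    = [0.41, 0.38, 0.35, 0.30, 0.22]  # scores on the full collection
reduced = [0.40, 0.39, 0.33, 0.31, 0.21]  # scores on a reduced collection

tau, _ = kendalltau(full, reduced)
r, _   = pearsonr(full, reduced)
print(f"tau = {tau:.3f}")  # 1.000: preference order is unchanged
print(f"rho = {r:.3f}")    # < 1: the gap sizes did change
```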
Screen locks are vulnerable to shoulder surfing: passwords, personal identification numbers (PINs), and patterns can be observed when smartphones are used in public spaces, even though these devices store important information and are frequently used in public. In this paper, we propose a new method that combines passwords with biometric authentication, which cannot be observed by shoulder surfing and is difficult to guess by brute-force attacks. In our method, sensors measure the motion of a finger while the user operates the mobile terminal, and this motion, which carries user-specific characteristics, is registered. The registered characteristics are classified by training self-organizing maps, and users are identified by consulting the self-organizing maps when they enter passwords on their mobile terminals.
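A minimal self-organizing map in NumPy (illustrative only: the grid size, training schedule, and random 6-dimensional "motion features" are assumptions, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(5, 5), epochs=50, lr0=0.5, sigma0=2.0):
    """Train a SOM whose units live on a grid[0] x grid[1] lattice."""
    w = rng.random((grid[0], grid[1], data.shape[1]))
    ii, jj = np.meshgrid(np.arange(grid[0]), np.arange(grid[1]), indexing="ij")
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)            # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 1e-3
        for x in data:
            d = np.linalg.norm(w - x, axis=2)  # distance to every unit
            bi, bj = np.unravel_index(d.argmin(), d.shape)
            h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
            w += lr * h[..., None] * (x - w)   # pull neighborhood toward x
    return w

def best_unit(w, x):
    return np.unravel_index(np.linalg.norm(w - x, axis=2).argmin(), w.shape[:2])

# Enrollment: train on one user's motion features. Authentication would
# check that a fresh sample maps to units associated with that user.
features = rng.random((100, 6))   # hypothetical 6-D finger-motion features
som = train_som(features)
print(best_unit(som, features[0]))
```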
This paper presents a new type of online password guessing attack called "WiPING" (Wi-Fi signal-based PIN Guessing attack), which guesses a victim's PIN (Personal Identification Number) within a small number of unlock attempts. WiPING uses wireless signal patterns, identified from observing the sequential finger movements involved in typing a PIN, to unlock a mobile device. A list of candidate PINs is generated from the wireless signal patterns and used to improve the performance of PIN guessing attacks. We implemented a proof-of-concept attack to demonstrate the feasibility of WiPING. Our results show that WiPING can be practically effective: while pure guessing attacks failed on all 20 PINs tested, WiPING successfully guessed two of them.
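The benefit of a ranked candidate list over blind enumeration fits in a few lines (the candidate list and the PIN-checking oracle below are hypothetical):

```python
from itertools import product

def pure_guess(check, budget):
    """Try 4-digit PINs in numeric order until the budget runs out."""
    for i, digits in enumerate(product("0123456789", repeat=4)):
        if i >= budget:
            return None
        pin = "".join(digits)
        if check(pin):
            return pin
    return None

def wiping_guess(check, candidates, budget):
    """Try signal-derived candidates first, most likely first."""
    for pin in candidates[:budget]:
        if check(pin):
            return pin
    return None

secret = "4071"
check = lambda pin: pin == secret
print(pure_guess(check, 20))                      # None: budget too small
print(wiping_guess(check, ["1234", "4071"], 20))  # "4071"
```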
Multi-core processors are widely used in mobile devices due to their high performance and good energy efficiency. To maintain cache coherency among cores, mobile multi-core systems integrate dedicated hardware, the ARM Cache Coherent Interconnect (CCI). In this study, we focus on the security implications of mobile multi-core designs. We monitor the cache coherency operations that occur during inter-core communication among PSL-related processes. With simple analysis, we can extract Android PSL information; preliminary results show that we can efficiently identify PSL patterns. This is a significant security violation in terms of confidentiality. Moreover, since mobile multi-core systems are already prevalent, the attack is practical and can spread easily.
Given "who-trusts/distrusts-whom" information, how can we propagate the trust and distrust? With the appearance of fraudsters in social network sites, the importance of trust prediction has increased. Most such methods use only explicit and implicit trust information (e.g., if Smith likes several of Johnson's reviews, then Smith implicitly trusts Johnson), but they do not consider distrust. In this paper, we propose PIN-TRUST, a novel method to handle all three types of interaction information: explicit trust, implicit trust, and explicit distrust. The novelties of our method are the following: (a) it is carefully designed, to take into account positive, implicit, and negative information, (b) it is scalable (i.e., linear on the input size), (c) most importantly, it is effective and accurate. Our extensive experiments with a real dataset, Epinions.com data, of 100K nodes and 1M edges, confirm that PIN-TRUST is scalable and outperforms existing methods in terms of prediction accuracy, achieving up to 50.4 percentage relative improvement.
With the massive amounts of data available today, it is common to store and process data using multiple machines. Parallel programming platforms such as MapReduce and its variants are popular frameworks for handling such large data. We present the first provably efficient algorithms to compute, store, and query data structures for range queries and approximate nearest neighbor queries in a popular parallel computing abstraction that captures the salient features of MapReduce and other massively parallel communication (MPC) models. In particular, we describe algorithms for kd-trees, range trees, and BBD-trees that require only O(1) rounds of communication for both preprocessing and querying while staying competitive with their classical counterparts in terms of running time and workload. Our algorithms are randomized, but they can be made deterministic at some increase in running time and workload while keeping the number of communication rounds constant.
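For reference, here is a compact sequential kd-tree with orthogonal range search, the kind of structure the paper parallelizes (this sketch is not the MPC construction itself):

```python
def build(points, depth=0):
    """Build a kd-tree by median split, cycling the axis per level."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build(points[:mid], depth + 1),
            "right": build(points[mid + 1:], depth + 1)}

def range_query(node, lo, hi, out):
    """Collect all points p with lo <= p <= hi coordinate-wise."""
    if node is None:
        return
    p, a = node["point"], node["axis"]
    if all(l <= c <= h for l, c, h in zip(lo, p, hi)):
        out.append(p)
    if lo[a] <= p[a]:                 # range may extend into left subtree
        range_query(node["left"], lo, hi, out)
    if p[a] <= hi[a]:                 # ... or into the right subtree
        range_query(node["right"], lo, hi, out)

tree = build([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
hits = []
range_query(tree, (3, 1), (8, 5), hits)
print(hits)   # the points with 3 <= x <= 8 and 1 <= y <= 5
```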
The paper focuses on a method of designing a highly automated hardware-software complex aimed at controlling the security of power grids and units that support both the central heating and power systems of smart cities. We understand this condition as a situation in which all energy consumers of a smart city are provided with the amounts of energy and fuel necessary for their living at any time, including possible periods of technogenic and natural hazards. Two main scientific principles lie at the base of the introduced approach. The first is diversification of the risks to the energy security of smart cities by rationally choosing the ratio of different energy generation sources in the fuel-energy balance of a smart city, including large fuel-fired electric power plants and small autonomous power generators, for example, wind energy machinery, solar collectors, heat pipes, etc. The second principle is energy efficiency and energy saving in smart cities. In our case this principle is realized through a high level of automation in monitoring and operating the security status of the energy systems and complexes that provide the consumers of smart cities with heat, hot water, and electricity, as well as through preventive alerts of possible emergencies and high reliability of all energy facilities. We formulate the main principle governing the construction of a smart hardware-software complex used to maintain highly automated control over the risks connected with the functioning of both power sources and transmission grids: an open block architecture, including highly autonomous block-modules for primary registration of measurement information, data analysis, and automated operation. The paper also describes general IT tools used to control the risks of supplying smart cities with energy and shows the structure of a highly automated system designed to select technological and managerial solutions for a smart city's energy supply system.
Kernel rootkits often hide associated malicious processes by altering the task struct information reported to upper layers and applications such as ps and top. Virtualized settings offer a unique opportunity to mitigate this behavior using dynamic virtual machine introspection (VMI). For known kernels, VMI can be deployed to search for kernel objects and identify them using unique data structure "signatures". In existing work, VMI-detected data structure signatures are based on values and structural features that must be (often exactly) present in the memory snapshots taken for accurate detection. This introduces a certain brittleness, and rootkits can escape detection by simply temporarily "un-tangling" the corresponding structures when not running. Here we introduce a new paradigm that defeats such behavior by training for and observing signatures of timing access patterns to any and all kernel-mapped data regions, including objects that are not directly linked in the "official" list of tasks. The use of timing information in training detection signatures renders the defenses resistant to attacks that try to evade detection by removing their corresponding malicious processes before scans. Our system, KXRay, successfully detected processes hidden by four traditional rootkits.
We study a sensor network setting in which samples are encrypted individually using different keys and maintained on cloud storage. For large systems, e.g., those that generate several million samples per day, fine-grained sharing of encrypted samples is challenging. Existing solutions, such as Attribute-Based Encryption (ABE) and the Key Aggregation Cryptosystem (KAC), can be utilized to address the challenge, but only to a certain extent: they are often computationally expensive and thus unlikely to operate at scale. We propose an algorithmic enhancement and two heuristics to improve KAC's key reconstruction cost while preserving its provable security. The improvement is particularly significant for range and down-sampling queries, accelerating the reconstruction cost from quadratic to linear running time. Our experimental study shows that for queries of 32k samples, the proposed fast reconstruction techniques speed up the original KAC by at least 90 times on range and down-sampling queries, and by eight times on general (arbitrary) queries. It also shows that, at the expense of splitting a query into 16 sub-queries and correspondingly issuing that number of different aggregated keys, reconstruction time can be reduced by 19 times. As such, the proposed techniques make KAC more applicable to practical scenarios such as sensor networks or the Internet of Things.
The advance of reverse engineering techniques has complicated efforts in intellectual property protection. Proactive methods have been developed in response, among which layout-level IC camouflaging is the leading example. However, existing camouflaging methods are rarely supported by provably secure criteria, which leads to over-estimation of the security level when countering the latest de-camouflaging attacks, e.g., the SAT-based attack. In this paper, a quantitative security criterion is proposed for measuring de-camouflaging complexity and is formally analyzed by demonstrating the equivalence between the existing de-camouflaging strategy and an active learning scheme. Supported by the new security criterion, two novel camouflaging techniques are proposed, a low-overhead camouflaging cell library and an AND-tree structure, to achieve exponentially increasing security levels at the cost of linearly increasing performance overhead on the protected circuit. A provably secure camouflaging framework is then developed by combining these two techniques. Experimental results under the security criterion show that circuits camouflaged with the proposed framework are highly resilient against the SAT-based attack with negligible performance overhead.
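The intuition behind the AND-tree's resilience can be sketched with a single AND gate whose input wires may secretly be inverted (a simplified illustration, not the paper's full construction): any input pattern the attacker applies rules out at most one of the 2^n candidate keys, so a query-based attack needs exponentially many queries in the tree width.

```python
from itertools import product

n = 4
SECRET_KEY = (1, 0, 1, 0)    # key bit 1 = that input wire is inverted

def chip(bits):              # oracle: the working camouflaged circuit
    return int(all(b ^ k for b, k in zip(bits, SECRET_KEY)))

candidates = set(product((0, 1), repeat=n))
queries = 0
while len(candidates) > 1:
    guess = next(iter(candidates))
    probe = tuple(1 ^ k for k in guess)   # outputs 1 iff guess is the key
    queries += 1
    if chip(probe):
        candidates = {guess}
    else:
        candidates.discard(guess)         # each query kills at most one key
print(queries, candidates)   # worst case 2**n - 1 queries
```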
We present the first complexity-theoretic secure steganographic protocol that, for any communication channel, is provably secure, reliable, and has nearly optimal bandwidth. Our system is unconditionally secure, i.e., our proof does not rely on any unproven complexity-theoretic assumption, such as the existence of one-way functions. This disproves the claim that the existence of one-way functions and access to a communication channel oracle are both necessary and sufficient conditions for the existence of secure steganography, in the sense that secure and reliable steganography exists independently of the existence of one-way functions.
Anonymous authentication allows one to authenticate herself without revealing her identity and has become an important technique for constructing privacy-preserving Internet connections. Anonymous password authentication is highly desirable, as it enables a client to authenticate herself with a human-memorable password while preserving her privacy. In this paper, we introduce a novel approach for designing anonymous password-authenticated key exchange (APAKE) protocols using algebraic message authentication codes (MACs): an algebraic MAC wrapped by a password is used by a client for anonymous authentication, and a server issues algebraic MACs to clients and acts as the verifier of login protocols. Our APAKE construction is secure provided that the algebraic MAC is strongly existentially unforgeable under random message and chosen verification queries attack (suf-rmva), is weakly pseudorandom and tag-randomization simulatable, and has simulation-sound extractable non-interactive zero-knowledge proofs (SE-NIZKs). To design practical APAKE protocols, we instantiate an algebraic MAC based on the q-SDH assumption that satisfies all the required properties, and construct credential presentation algorithms for the MAC that have optimal efficiency for a randomize-then-prove paradigm. Based on this algebraic MAC, we instantiate a highly practical APAKE protocol, denoted APAKE, which is much more efficient than the mechanisms specified by ISO/IEC 20009-4. An efficient revocation mechanism for APAKE is also proposed.
We integrate APAKE into TLS to present an anonymous client authentication mode in which clients holding passwords can authenticate themselves to a server anonymously. Our implementation with 128-bit security shows that the average connection time of the APAKE-based ciphersuite is 2.8 ms. With APAKE integrated into the OpenSSL library and using an Apache web server on a 2-core desktop computer, we could serve 953 ECDHE-ECDSA-AES128-GCM-SHA256 HTTPS connections per second for a 10 KB payload. Compared to an ECDSA-signed elliptic curve Diffie-Hellman ciphersuite with mutual authentication, this amounts to a 0.27 KB larger handshake and a 13% reduction in throughput.
Recently there has been much interest in performing search queries over encrypted data to enable functionality while protecting sensitive data. One particularly efficient mechanism for executing such queries is order-preserving encryption/encoding (OPE), which produces ciphertexts that preserve the relative order of the underlying plaintexts, thus allowing range and comparison queries to be performed directly on ciphertexts. Recently, Popa et al. (SP 2013) gave the first construction of an ideally-secure OPE scheme, and Kerschbaum (CCS 2015) showed how to achieve the even stronger notion of frequency-hiding OPE. However, as Naveed et al. (CCS 2015) have demonstrated, these constructions remain vulnerable to several attacks. Additionally, all previous ideal OPE schemes (with or without frequency-hiding) either require a large round complexity of O(log n) rounds for each insertion or a large persistent client storage of size O(n), where n is the number of items in the database. It is thus desirable to achieve a range query scheme that addresses both issues gracefully. In this paper, we propose an alternative approach to range queries over encrypted data that is optimized to support insert-heavy workloads, as are common in "big data" applications, while still maintaining search functionality and achieving stronger security. Specifically, we propose a new primitive called partial order preserving encoding (POPE) that achieves ideal OPE security with frequency hiding and also leaves a sizable fraction of the data pairwise incomparable. Our scheme uses only O(1) persistent and O(n^ε) non-persistent client storage, for 0 < ε < 1, to support search queries. This improved security and performance makes our scheme better suited for today's insert-heavy databases.
Aiming to reduce the cost and complexity of maintaining networking infrastructures, organizations are increasingly outsourcing their network functions (e.g., firewalls, traffic shapers and intrusion detection systems) to the cloud, and a number of industrial players have started to offer network function virtualization (NFV)-based solutions. Alas, outsourcing network functions in its current setting implies that sensitive network policies, such as firewall rules, are revealed to the cloud provider. In this paper, we investigate the use of cryptographic primitives for processing outsourced network functions, so that the provider does not learn any sensitive information. More specifically, we present a cryptographic treatment of privacy-preserving outsourcing of network functions, introducing security definitions as well as an abstract model of generic network functions, and then propose a few instantiations using partial homomorphic encryption and public-key encryption with keyword search. We include a proof-of-concept implementation of our constructions and show that network functions can be privately processed by an untrusted cloud provider in a few milliseconds.
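A minimal sketch of the partially homomorphic idea for one very simple network function, an encrypted traffic counter (illustrative only; the phe Paillier library and the counter scenario are our assumptions, not necessarily the paper's instantiations):

```python
from phe import paillier

pub, priv = paillier.generate_paillier_keypair(n_length=2048)

counter = pub.encrypt(0)          # gateway encrypts before outsourcing

# Untrusted middlebox: accumulates observed packet sizes homomorphically,
# never seeing the running total in the clear.
for pkt_len in (1500, 40, 576):
    counter = counter + pkt_len   # ciphertext-plaintext addition

print(priv.decrypt(counter))      # 2116; only the key owner can read it
```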
TCP congestion control has been known for its crucial role in stabilizing the Internet and preventing congestion collapse. However, rapid advancements in networking technologies have given rise to challenging network environments such as data center networks (DCNs), in which the traditional TCP algorithm suffers several impairments. The shortcomings of TCP when deployed in DCNs have motivated the development of multiple new variants, including DCTCP, ICTCP, IA-TCP, and D2TCP, but all of these algorithms achieve their advantages at the cost of a number of drawbacks in the global Internet. Motivated by the belief that new innovations need to be established on top of a solid foundation with a thorough understanding of existing, well-established algorithms, we have been working towards a comprehensive analysis of various conventional TCP algorithms in DCNs and other modern networks. This paper presents the first milestone towards the completion of our comparative study, in which we present results obtained by simulating multiple TCP variants (NewReno, Vegas, HighSpeed, Scalable, Westwood+, BIC, CUBIC, and YeAH) over a fat-tree architecture. Each protocol is evaluated in terms of queue length, number of dropped packets, average packet delay, and aggregate bandwidth as a percentage of the channel bandwidth.
The existence of and market for notebooks designed for users to write down passwords illuminates a sharp contrast: what is often prescribed as proper password behavior—e.g., never write down passwords—differs from what many users actually do. These password logbooks and their reviews provide many unique and surprising insights into their users' beliefs, motivations, and behaviors. We examine the password logbooks and analyze, using grounded theory, their reviews, to better understand how these users think and behave with respect to password authentication. Several themes emerge including: previous password management strategies, gifting, organizational strategies, password sharing, and dubious security advice. Some users argue these books enhance security.
Unease over data privacy will retard consumer acceptance of IoT deployments. The primary source of discomfort is a lack of user control over raw data that is streamed directly from sensors to the cloud. This is a direct consequence of the over-centralization of today's cloud-based IoT hub designs. We propose a solution that interposes a locally-controlled software component called a privacy mediator on every raw sensor stream. Each mediator is in the same administrative domain as the sensors whose data is being collected, and dynamically enforces the current privacy policies of the owners of the sensors or mobile users within the domain. This solution necessitates a logical point of presence for mediators within the administrative boundaries of each organization. Such points of presence are provided by cloudlets, which are small locally-administered data centers at the edge of the Internet that can support code mobility. The use of cloudlet-based mediators aligns well with natural personal and organizational boundaries of trust and responsibility.
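A toy mediator might look like the following (entirely hypothetical policy and field names; real mediators run on cloudlets and enforce richer, dynamically updated policies):

```python
# A locally-controlled filter interposed on a raw sensor stream
# before anything leaves the owner's administrative domain.
POLICY = {"drop_fields": {"face_image"}, "round_gps": 2}

def mediate(sample):
    out = {k: v for k, v in sample.items() if k not in POLICY["drop_fields"]}
    if "gps" in out:   # coarsen location per the owner's current policy
        out["gps"] = tuple(round(c, POLICY["round_gps"]) for c in out["gps"])
    return out

raw = {"gps": (40.110588, -88.228333), "temp_c": 21.4, "face_image": b"..."}
print(mediate(raw))   # {'gps': (40.11, -88.23), 'temp_c': 21.4}
```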