Biblio
For authenticating time-critical broadcast messages, the IEEE 1609.2 security standard for Vehicular Ad hoc Networks (VANETs) suggests the use of the Elliptic Curve Digital Signature Algorithm (ECDSA). Since ECDSA verification is computationally expensive, the most commonly suggested alternatives are TESLA and signature amortization; unfortunately, these algorithms lack immediate authentication and non-repudiation. Therefore, we introduce a probabilistic verification scheme for an ECDSA-based authentication protocol. Using ns-2 simulations, we compare the performance of all the above-mentioned broadcast authentication algorithms. The results show that our proposed scheme achieves a higher packet-processed ratio than all the other algorithms.
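As a rough illustration of the idea, the sketch below verifies every signature when the receive queue is short and samples verifications under load; the sampling policy, the threshold, and the probability p are our assumptions, not the paper's exact scheme (shown with Python's cryptography package):

```python
# Minimal sketch of probabilistic signature verification under load.
# The policy and parameter names are assumptions, not the paper's scheme.
import random
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def process_beacon(public_key, message: bytes, signature: bytes,
                   queue_len: int, threshold: int = 50, p: float = 0.3) -> bool:
    """Verify every beacon when the queue is short; sample under load."""
    if queue_len > threshold and random.random() > p:
        return True  # accept unverified to keep up with the broadcast rate
    try:
        public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False
```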
A nonrepudiation protocol from party S to party R performs two tasks. First, the protocol enables party S to send to party R some text x along with a proof (that can convince a judge) that x was indeed sent by S. Second, the protocol enables party R to receive text x from S and to send to S a proof (that can convince a judge) that x was indeed received by R. A nonrepudiation protocol from one party to another is called two-phase iff the two parties execute the protocol as specified until one of the two parties receives its complete proof. Then and only then does this party refrain from sending any message specified by the protocol because these messages only help the other party complete its proof. In this paper, we present methods for specifying and verifying two-phase nonrepudiation protocols.
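A minimal sketch of the two proofs being exchanged, using Ed25519 signatures from Python's cryptography package; the judge's role and the two-phase termination behavior described above are omitted:

```python
# Toy sketch of the two proofs in a nonrepudiation exchange (not the
# paper's full two-phase protocol). Key distribution is simplified.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

sk_S, sk_R = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
x = b"some text x"

proof_of_origin = sk_S.sign(x)                # S -> R: x plus proof it came from S
sk_S.public_key().verify(proof_of_origin, x)  # R (or a judge) checks it

receipt = sk_R.sign(x)                        # R -> S: proof x was received by R
sk_R.public_key().verify(receipt, x)          # S (or a judge) checks it
```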
A nonrepudiation protocol from a sender S to a set of potential receivers {R1, R2, ..., Rn} performs two functions. First, this protocol enables S to send to every potential receiver Ri a copy of file F along with a proof that can convince an unbiased judge that F was indeed sent by S to Ri. Second, this protocol also enables each Ri to receive from S a copy of file F and to send back to S a proof that can convince an unbiased judge that F was indeed received by Ri from S. When a nonrepudiation protocol from S to {R1, R2, ..., Rn} is implemented in a cloud system, the communications between S and the set of potential receivers {R1, R2, ..., Rn} are not carried out directly. Rather, these communications are carried out through a cloud C. In this paper, we present a nonrepudiation protocol that is implemented in a cloud system and show that this protocol is correct. We also show that this protocol has two clear advantages over nonrepudiation protocols that are not implemented in cloud systems.
Particle Swarm Optimization (PSO) has been shown to perform very well on a wide range of optimization problems. One drawback of PSO is that the base algorithm assumes continuous variables. In this paper, we present a version of PSO that is able to optimize over discrete variables. This new PSO algorithm, which we call Integer and Categorical PSO (ICPSO), incorporates ideas from Estimation of Distribution Algorithms (EDAs) in that particles represent probability distributions rather than solution values, and the PSO update modifies the probability distributions. We describe our new algorithm and compare its performance against other discrete PSO algorithms. In our experiments, we demonstrate that our algorithm outperforms comparable methods on both discrete benchmark functions and NK landscapes, a mathematical framework that generates tunable fitness landscapes for evaluating evolutionary algorithms.
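The core idea lends itself to a compact sketch: each particle stores one categorical distribution per variable, samples a concrete solution from it, and nudges the distributions toward the (one-hot encoded) personal and global bests. The mixing coefficients and toy objective below are our assumptions, not the paper's exact update rule:

```python
# ICPSO-style sketch: particles are per-variable categorical distributions.
import numpy as np

rng = np.random.default_rng(0)
n_vars, n_vals, n_particles = 8, 4, 20
fitness = lambda sol: -np.sum(sol)          # toy objective on category codes

P = rng.dirichlet(np.ones(n_vals), size=(n_particles, n_vars))
pbest = [None] * n_particles
pbest_f = np.full(n_particles, -np.inf)
gbest, gbest_f = None, -np.inf
one_hot = lambda sol: np.eye(n_vals)[sol]   # solution -> degenerate distribution

for _ in range(100):
    for i in range(n_particles):
        # sample a concrete solution from the particle's distributions
        sol = np.array([rng.choice(n_vals, p=P[i, v]) for v in range(n_vars)])
        f = fitness(sol)
        if f > pbest_f[i]:
            pbest_f[i], pbest[i] = f, sol
        if f > gbest_f:
            gbest_f, gbest = f, sol
    for i in range(n_particles):
        # pull distributions toward personal and global bests, renormalize
        P[i] += 0.3 * (one_hot(pbest[i]) - P[i]) + 0.3 * (one_hot(gbest) - P[i])
        P[i] = np.clip(P[i], 1e-9, None)
        P[i] /= P[i].sum(axis=1, keepdims=True)

print(gbest, gbest_f)
```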
Non-malleability is an important and intensively studied security notion for many cryptographic primitives. In the context of public key encryption, this notion means it is infeasible for an adversary to transform an encryption of some message m into one of a related message m' under the given public key. Although it provides a strong security property for many applications, it still does not suffice for some scenarios, such as systems where users can issue keys on the fly. In such settings, the adversary may have the power to transform both the given public key and the ciphertext. To withstand such attacks, Fischlin introduced a stronger notion, known as complete non-malleability, which requires that the non-malleability property be preserved even for adversaries attempting to produce a ciphertext of some related message under a transformed public key. To date, many schemes satisfying this stronger security have been proposed, but they are either inefficient or proved secure only in the random oracle model. In this work, we put forward a new encryption scheme in the common reference string model. Based on the standard DBDH assumption, the proposed scheme is proved completely non-malleable against adaptive chosen-ciphertext attacks in the standard model. In our scheme, well-formed public keys and ciphertexts can be publicly recognized without drawing support from unwieldy techniques such as non-interactive zero-knowledge proofs or one-time signatures, thus achieving better performance.
In cloud computing, computationally weak users are often willing to outsource costly computations to a cloud, and at the same time they need to check the correctness of the results provided by the cloud. Such activities motivate the development of verifiable computation (VC). Recently, Parno, Raykova and Vaikuntanathan showed that a VC protocol can be constructed from an attribute-based encryption (ABE) scheme for the same class of functions. In this paper, we propose two practical and efficient semi-adaptively secure key-policy attribute-based encryption (KP-ABE) schemes with constant-size ciphertexts. Semi-adaptive security requires that the adversary designate the challenge attribute set after it receives the public parameters but before it issues any secret key query, which is stronger than the selective security guarantee. Our first construction deals with a small universe while the second one supports a large universe. Both constructions employ the technique underlying the prime-order instantiation of nested dual system groups, which is based on the d-linear assumption, including the SXDH and DLIN assumptions. In order to evaluate the performance, we implement our ABE schemes in the Python language using Charm. Compared with previous KP-ABE schemes with constant-size ciphertexts, our constructions achieve shorter ciphertext and secret key sizes, and require lower computation costs, especially under the SXDH assumption.
BGP is known to have many security vulnerabilities due to the very nature of its underlying assumptions of trust among independently operated networks. Most prior efforts have focused on attacks that can be addressed using traditional cryptographic techniques to ensure authentication or integrity, e.g., BGPSec and related works. Although augmenting BGP with authentication and integrity mechanisms is critical, such mechanisms are, by design, far from sufficient to prevent attacks based on manipulating the complex BGP protocol itself. In this paper, we identify two serious attacks on two of the most fundamental goals of BGP—to ensure reachability and to enable ASes to pick routes available to them according to their routing policies—even in the presence of BGPSec-like mechanisms. Our key contributions are to (1) formalize a series of critical security properties, (2) experimentally validate, using commodity router implementations, that BGP fails to achieve those properties, (3) quantify the extent of these vulnerabilities in the Internet's AS topology, and (4) propose simple modifications to provably ensure that those properties are satisfied. Our experiments show that, using our attacks, a single malicious AS can cause thousands of other ASes to become disconnected from thousands of other ASes for arbitrarily long periods, while our suggested modifications almost completely eliminate such attacks.
Software Defined Internet Exchange Points (SDXes) increase the flexibility of interdomain traffic delivery on the Internet. Yet, an SDX inherently requires multiple participants to have access to a single, shared physical switch, which creates the need for an authorization mechanism to mediate this access. In this paper, we introduce a logic and mechanism called FLANC (A Formal Logic for Authorizing Network Control), which authorizes each participant to control forwarding actions on a shared switch and also allows participants to delegate forwarding actions to other participants at the switch (e.g., a trusted third party). FLANC extends the "says" and "speaks for" logics previously designed for operating system objects to handle expressions involving network traffic flows. We describe FLANC, explain how participants can use it to express authorization policies for realistic interdomain routing settings, and demonstrate that it is efficient enough for operational settings.
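A toy rendering of the underlying idea, not FLANC's actual syntax or inference rules: a statement is authorized if the speaker holds the permission itself or transitively speaks for a principal who does:

```python
# Toy "says" / "speaks for" check in the spirit of FLANC; the real
# logic's syntax, flow expressions, and rules are much richer.
speaks_for = {("ThirdParty", "AS1")}   # ThirdParty speaks for AS1 (delegation)
authorized = {("AS1", "modify_flow:10.0.1.0/24")}

def says(principal: str, action: str) -> bool:
    """Authorized if the principal, or a principal it speaks for
    (transitively), holds the permission."""
    if (principal, action) in authorized:
        return True
    return any(says(p, action) for q, p in speaks_for if q == principal)

assert says("ThirdParty", "modify_flow:10.0.1.0/24")  # granted via delegation
assert not says("ThirdParty", "modify_flow:10.0.9.0/24")
```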
Different wireless Peer-to-Peer (P2P) routing protocols rely on cooperative interaction among peers, yet most of the surveyed protocols provide little detail on how the peers can take into account one another's reliability to improve routing efficiency in collaborative networks. Previous research has shown that in most trust and reputation evaluation schemes, the peers' rating behaviour can be improved to include the peers' attributes for understanding peers' reliability. This paper proposes a reliability-based trust model for dynamic trust evaluation between peers in P2P networks for collaborative routing. Since the peers' routing attributes vary dynamically, our proposed model accommodates the dynamic changes of peers' attributes and behaviour. We introduce peers' buffers as a scaling factor for peers' trust evaluation in trust and reputation routing protocols. A simulation-based comparison between the reliability-based and non-reliability-based trust models shows the improved performance of our proposed model in terms of delivery ratio and average message latency.
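As an illustration only (the weights and the scaling rule below are our assumptions, not the paper's model), buffer occupancy can be folded into a standard direct/reputation trust combination like this:

```python
# Illustrative buffer-scaled trust update; weights and scaling are assumed.
def update_trust(direct: float, reputation: float,
                 buffer_free: float, alpha: float = 0.6) -> float:
    """Combine direct and reputation trust (both in [0,1]) and scale by
    the fraction of free buffer: a congested peer is less reliable now."""
    base = alpha * direct + (1 - alpha) * reputation
    return base * buffer_free

print(update_trust(0.9, 0.7, buffer_free=0.5))  # 0.41
```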
The demand for trained cybersecurity operators is growing more quickly than traditional programs in higher education can fill. At the same time, unemployment for returning military veterans has become a nationally discussed problem. We describe the design and launch of New Skills for a New Fight (NSNF), an intensive, one-year program to train military veterans for the cybersecurity field. This non-traditional program, which leverages experience that veterans gained in military service, includes recruitment and selection, a base of knowledge in the form of four university courses in a simultaneous cohort mode, a period of hands-on cybersecurity training, industry certifications and a practical internship in a Security Operations Center (SOC). Twenty veterans entered this pilot program in January of 2016, and will complete in less than a year's time. Initially funded by a global financial services company, the program provides veterans with an expense-free preparation for an entry-level cybersecurity job.
Advanced cyber attacks consist of multiple stages aimed at being stealthy and elusive. Such attack patterns leave their footprints spatio-temporally dispersed across many different logs in victim machines. However, existing log-mining intrusion analysis systems typically target only a single type of log to discover evidence of an attack and therefore fail to exploit fundamental inter-log connections. The output of such single-log analysis can hardly reveal the complete attack story for complex, multi-stage attacks. Additionally, some existing approaches require heavyweight system instrumentation, which makes them impractical to deploy in real production environments. To address these problems, we present HERCULE, an automated multi-stage log-based intrusion analysis system. Inspired by graph analytics research in social network analysis, we model multi-stage intrusion analysis as a community discovery problem. HERCULE builds multi-dimensional weighted graphs by correlating log entries across multiple lightweight logs that are readily available on commodity systems. From these, HERCULE discovers any "attack communities" embedded within the graphs. Our evaluation with 15 well-known APT attack families demonstrates that HERCULE can reconstruct attack behaviors from a spectrum of cyber attacks that involve multiple stages, with high accuracy and low false positive rates.
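The correlation idea can be sketched compactly: log entries become graph nodes, shared fields add weighted edges, and community detection groups the entries of one attack. The fields and weights below are illustrative stand-ins for HERCULE's much richer correlation rules:

```python
# Sketch: correlate entries from different logs via shared fields, then
# extract communities. Fields and weights are illustrative stand-ins.
import networkx as nx
from networkx.algorithms import community

entries = [
    {"id": "dns:1",  "ip": "1.2.3.4", "pid": None, "file": None},
    {"id": "http:7", "ip": "1.2.3.4", "pid": 512,  "file": "drop.exe"},
    {"id": "exec:3", "ip": None,      "pid": 512,  "file": "drop.exe"},
    {"id": "auth:9", "ip": "5.6.7.8", "pid": None, "file": None},  # uncorrelated
]

G = nx.Graph()
for a in entries:
    for b in entries:
        if a["id"] >= b["id"]:
            continue  # visit each unordered pair once
        w = sum(a[k] is not None and a[k] == b[k] for k in ("ip", "pid", "file"))
        if w:
            G.add_edge(a["id"], b["id"], weight=w)

for com in community.greedy_modularity_communities(G, weight="weight"):
    print(sorted(com))  # dns/http/exec entries form one "attack community"
```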
We propose a new voting scheme, BeleniosRF, that offers both receipt-freeness and end-to-end verifiability. It is receipt-free in a strong sense, meaning that even dishonest voters cannot prove how they voted. We provide a game-based definition of receipt-freeness for voting protocols with non-interactive ballot casting, which we name strong receipt-freeness (sRF). To our knowledge, sRF is the first game-based definition of receipt-freeness in the literature, and it has the merit of being particularly concise and simple. Built upon the Helios protocol, BeleniosRF inherits its simplicity and does not require any anti-coercion strategy from the voters. We implement BeleniosRF and show its feasibility on a number of platforms, including desktop computers and smartphones.
Mobile devices store a diverse set of private user data and have gradually become a hub for controlling users' other personal Internet-of-Things devices. Access control on mobile devices is therefore highly important. The widely accepted solution is to protect access by asking for a password. However, password authentication is tedious, e.g., a user needs to input a password every time she wants to use the device. Moreover, existing biometrics such as face, fingerprint, and touch behaviors are vulnerable to forgery attacks. We propose a new touch-based biometric authentication system that is passive and secure against forgery attacks. In our touch-based authentication, a user's touch behaviors are a function of some random "secret". The user subconsciously internalizes the secret while touching the device's screen. However, an attacker cannot know the secret at the time of attack, which makes it challenging to perform forgery attacks even if the attacker has already obtained the user's touch behaviors. We evaluate our touch-based authentication system by collecting data from 25 subjects. Results are promising: the random secrets do not influence user experience and, for targeted forgery attacks, our system achieves Equal Error Rates (EERs) that are 0.18 lower than those of previous touch-based authentication systems.
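For reference, the EER metric used in this comparison is standard and easy to compute from genuine and impostor score samples; the sketch below is generic methodology, not the paper's code:

```python
# Generic Equal Error Rate computation for authentication scores.
import numpy as np

def eer(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """EER = operating point where false accepts equal false rejects."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false accept
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false reject
    i = int(np.argmin(np.abs(far - frr)))
    return (far[i] + frr[i]) / 2

rng = np.random.default_rng(1)
print(eer(rng.normal(1.0, 0.5, 500), rng.normal(0.0, 0.5, 500)))
```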
Nowadays, ATMs are used for monetary transactions for the convenience of the user, providing round-the-clock (24x7) financial services. The bank issues a debit or credit card to its user along with a PIN known only to the bank and the user. Sometimes a user's card may be stolen, and the thief can access all the confidential information, such as the credit card number, cardholder name, expiry date, and CVV number, and use it to complete fraudulent transactions. In this paper, we introduce biometric encryption based on the eye retina to enhance security over wireless and unreliable networks such as the Internet. In this method, the user can authorize a third person to make transactions with the debit or credit card on his/her behalf. In the proposed method, this third person performs the financial transaction by providing his/her eye retina for authorization and identification.
Recently, proactive systems such as Google Now and Microsoft Cortana have become increasingly popular, reshaping the way users access information on mobile devices. In these systems, relevant content is presented to users based on their context, without a query, in the form of information cards that do not require a click to satisfy the user. As a result, prior approaches based on clicks cannot provide reliable measurements of user satisfaction with such systems. It is also unclear how much of the previous findings regarding good abandonment in reactive Web search can be applied to these proactive systems, due to the intrinsic difference in user intent and the greater variety of content types and their presentations. In this paper, we present the first large-scale analysis of viewing behavior based on the viewport (the visible fraction of a Web page) of mobile devices, towards measuring user satisfaction with the information cards of mobile proactive systems. In particular, we identify and analyze a variety of factors that may influence viewing behavior, including biases from ranking positions, the types and attributes of the information cards, and the touch interactions with the mobile devices. We show that by modeling these various factors we can better measure user satisfaction with mobile proactive systems, enabling stronger statistical power in large-scale online A/B testing.
One way of evaluating the reusability of a test collection is to determine whether removing the unique contributions of some system would alter the preference order between that system and others. Rank correlation measures such as Kendall's tau are often used for this purpose. Rank correlation measures are appropriate for ordinal measures in which only preference order is important, but many evaluation measures produce system scores in which both the preference order and the magnitude of the score difference are important. Such measures are referred to as interval. Pearson's rho offers one way in which correlation can be computed over results from an interval measure such that smaller errors in the gap size are preferred. When seeking to improve over existing systems, we care most about comparisons among the best systems. For that purpose we prefer head-weighted measures such as tau_AP, which is designed for ordinal data. No existing head-weighted measure fully leverages the information present in interval effectiveness measures. This paper introduces such a measure, referred to as Pearson Rank.
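The contrast between the ordinal and interval views is easy to see on toy data with SciPy (Pearson Rank itself is the paper's new measure and is not in SciPy):

```python
# Ordinal vs. interval agreement on toy system scores.
from scipy.stats import kendalltau, pearsonr

official = [0.35, 0.33, 0.30, 0.22, 0.10]  # e.g., MAP of 5 systems, full judgments
reduced  = [0.34, 0.35, 0.29, 0.21, 0.11]  # same systems, reduced judgments

tau, _ = kendalltau(official, reduced)  # ordinal: only order swaps matter
rho, _ = pearsonr(official, reduced)    # interval: gap sizes matter too
print(f"tau={tau:.3f} rho={rho:.3f}")
```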
Cloud computing is rapidly reshaping the server administration landscape. The widespread use of virtualization and the increasingly high server consolidation ratios, in particular, have introduced unprecedented security challenges for users, increasing the exposure to intrusions and opening up new opportunities for attacks. Deploying security mechanisms in the hypervisor to detect and stop intrusion attempts is a promising strategy to address this problem. Existing hypervisor-based solutions, however, are typically limited to very specific classes of attacks and introduce exceedingly high performance overhead for production use. In this paper, we present Slick (Storage-Level Intrusion ChecKer), an intrusion detection system (IDS) for virtualized storage devices. Slick detects intrusion attempts by efficiently and transparently monitoring write accesses to critical regions on storage devices. The low-overhead monitoring component operates entirely inside the hypervisor, with no introspection or modifications required in the guest VMs. Using Slick, users can deploy generic IDS rules to detect a broad range of real-world intrusions in a flexible and practical way. Experimental results confirm that Slick is effective at enhancing the security of virtualized servers, while imposing less than 5% overhead in production.
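The flavor of a storage-level rule can be sketched as an overlap check between guest writes and critical byte regions; the region names and offsets below are invented for illustration, and the real hypervisor integration is far more involved:

```python
# Toy flavor of storage-level IDS rules: flag guest writes that touch
# critical byte regions of a virtual disk. Regions/offsets are invented.
CRITICAL = [("boot_sector", 0, 512), ("etc_passwd", 4096, 8192)]

def check_write(offset: int, length: int) -> list:
    """Return names of critical regions overlapped by a guest write."""
    end = offset + length
    return [name for name, lo, hi in CRITICAL if offset < hi and end > lo]

print(check_write(100, 64))    # ['boot_sector']
print(check_write(9000, 512))  # []
```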
We introduce OPTIK, a new practical design pattern for designing and implementing fast and scalable concurrent data structures. OPTIK relies on the commonly used technique of version numbers for detecting conflicting concurrent operations. We show how to implement the OPTIK pattern using the novel concept of OPTIK locks. These locks enable the use of version numbers for implementing very efficient optimistic concurrent data structures. Existing state-of-the-art lock-based data structures acquire the lock and then check for conflicts. In contrast, with OPTIK locks, we merge the lock acquisition with the detection of conflicting concurrent operations in a single atomic step, similarly to lock-free algorithms. We illustrate the power of our OPTIK pattern and its implementation by introducing four new algorithms and by optimizing four state-of-the-art algorithms for linked lists, skip lists, hash tables, and queues. Our results show that concurrent data structures built using OPTIK are more scalable than the state of the art.
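A minimal sketch of an OPTIK-style versioned lock follows; the paper realizes the validate-and-acquire step with a single atomic compare-and-swap, which a Python mutex only emulates here:

```python
# OPTIK-style versioned lock (sketch); atomicity is emulated with a mutex.
import threading

class OptikLock:
    def __init__(self):
        self._lock = threading.Lock()
        self.version = 0

    def try_lock_at(self, seen_version: int) -> bool:
        """Acquire only if nothing changed since `seen_version` was read:
        lock acquisition and conflict detection in a single step."""
        if not self._lock.acquire(blocking=False):
            return False
        if self.version != seen_version:   # a conflicting update happened
            self._lock.release()
            return False
        return True

    def unlock(self):
        self.version += 1                  # publish the update
        self._lock.release()

# Optimistic update pattern:
lock = OptikLock()
v = lock.version                 # 1. optimistically read the version
new_value = 42                   # 2. compute the update without locking
if lock.try_lock_at(v):          # 3. validate and acquire in one step
    # 4. apply `new_value` to the shared structure, then:
    lock.unlock()
```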
The rapid progress of multi-/many-core architectures has outpaced data-intensive parallel applications, which are often not yet tuned to extract maximum performance. The advent of parallel programming frameworks offering structured patterns has alleviated the developers' burden of adapting such applications to parallel platforms. For example, the use of synchronization mechanisms in multithreaded applications is essential on shared-cache multi-core architectures. However, ensuring an appropriate use of their interfaces can be challenging, since different memory models plus instruction reordering at the compiler/processor level may influence the occurrence of data races. Race detectors are valuable in this sense; nevertheless, if lock-free data structures with no high-level atomics are used, they may emit false positives. In this paper, we extend the ThreadSanitizer race detection tool to support the semantics of the general Single-Producer/Single-Consumer (SPSC) lock-free parallel queue and to recognize as benign the data races that arise where the queue is correctly used. To perform our analysis, we leverage the FastFlow SPSC bounded lock-free queue implementation and test our extensions over a set of μ-benchmarks and real applications on a dual-socket Intel Xeon CPU E5-2695 platform. We demonstrate that this approach can reduce the number of data-race warning messages by 30% on average.
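For context, the SPSC queue semantics in question can be captured in a few lines: with exactly one producer and one consumer, the unsynchronized index reads are benign by construction, which is precisely what the extended detector must be taught. This is a sketch in the spirit of FastFlow's queue, not its implementation (and Python's GIL hides the memory-ordering subtleties the paper deals with):

```python
# Minimal single-producer/single-consumer bounded ring buffer. Safe only
# with exactly one producer thread and one consumer thread; the cross-
# thread index reads are the "benign" races a detector must understand.
class SPSCQueue:
    def __init__(self, capacity: int):
        self.buf = [None] * capacity
        self.head = 0   # written only by the consumer
        self.tail = 0   # written only by the producer

    def push(self, item) -> bool:           # producer side
        nxt = (self.tail + 1) % len(self.buf)
        if nxt == self.head:                # full
            return False
        self.buf[self.tail] = item
        self.tail = nxt                     # publish the slot
        return True

    def pop(self):                          # consumer side
        if self.head == self.tail:          # empty
            return None
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        return item
```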
A MANET is a collection of self-configuring nodes connected by wireless links. Each node of a mobile ad hoc network acts as a router and finds a suitable route to forward a packet from source to destination. Such networks are applicable in areas where establishing an infrastructure is not possible, such as military environments; beyond the military, MANETs are also used in civilian settings such as sports stadiums and meeting rooms. The routing functionality of each node is the cause of many security threats to routing. This paper addresses the problem of identifying and isolating wormhole attacks, in which malicious nodes refuse to forward packets, in wireless mobile ad hoc networks. The impact of this attack has been shown to be detrimental to network performance, lowering the packet delivery ratio and dramatically increasing the end-to-end delay. The proposed work provides efficient and secure routing in MANETs: by using buffer length and round-trip time (RTT) calculations, it minimizes routing overhead. This research is based on the detection and prevention of wormhole attacks in AODV. The proposed protocol is simulated using NS-2 and its performance is compared with the standard AODV protocol. The statistical analysis shows that the modified AODV protocol detects wormhole attacks efficiently and provides a secure and optimal path for routing.
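The RTT side of such detection can be illustrated simply: a wormhole-tunneled "neighbor" appears one hop away but exhibits an RTT far above that of genuine neighbors. The threshold factor below is an assumption, not the paper's calibration:

```python
# Illustrative RTT-based wormhole check; the 2.5x factor is assumed.
def suspect_wormhole(rtts_ms: dict, factor: float = 2.5) -> list:
    """Flag 'one-hop neighbors' whose RTT is far above the average."""
    avg = sum(rtts_ms.values()) / len(rtts_ms)
    return [n for n, rtt in rtts_ms.items() if rtt > factor * avg]

neighbors = {"n1": 4.1, "n2": 3.8, "n3": 4.4, "n4": 39.0}  # n4 is tunneled
print(suspect_wormhole(neighbors))  # ['n4']
```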
A MANET is a group of wireless nodes forming a dynamic wireless network without any infrastructure. As MANETs are becoming an important technology for commercial and military distributed applications, implementing security over them has proved mandatory, as such networks are particularly vulnerable to attacks. When dealing with data transfer between nodes in a MANET, confidentiality and message integrity are the two important factors that need careful attention. This paper proposes the implementation of a security algorithm over data transfer in the Optimized Link State Routing (OLSR) protocol, providing trust management in MANETs by implementing confidentiality through strong 256-bit AES encryption and message integrity and authenticity through digital signatures, using OpenSSL.
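A minimal sketch of applying both protections to one routing payload, using Python's cryptography package in place of the OpenSSL C API (the paper's actual message formats and key management are not reproduced):

```python
# Confidentiality via AES-256-GCM, integrity/authenticity via Ed25519.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

key, nonce = AESGCM.generate_key(bit_length=256), os.urandom(12)
sk = Ed25519PrivateKey.generate()

payload = b"OLSR topology-control message"
ct = AESGCM(key).encrypt(nonce, payload, None)   # confidentiality
sig = sk.sign(ct)                                # integrity + origin

sk.public_key().verify(sig, ct)                  # raises if tampered
assert AESGCM(key).decrypt(nonce, ct, None) == payload
```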
Due to the "curse of dimensionality" problem, it is very expensive to process the nearest neighbor (NN) query in high-dimensional spaces; and hence, approximate approaches, such as Locality-Sensitive Hashing (LSH), are widely used for their theoretical guarantees and empirical performance. Current LSH-based approaches target at the L1 and L2 spaces, while as shown in previous work, the fractional distance metrics (Lp metrics with 0 textless p textless 1) can provide more insightful results than the usual L1 and L2 metrics for data mining and multimedia applications. However, none of the existing work can support multiple fractional distance metrics using one index. In this paper, we propose LazyLSH that answers approximate nearest neighbor queries for multiple Lp metrics with theoretical guarantees. Different from previous LSH approaches which need to build one dedicated index for every query space, LazyLSH uses a single base index to support the computations in multiple Lp spaces, significantly reducing the maintenance overhead. Extensive experiments show that LazyLSH provides more accurate results for approximate kNN search under fractional distance metrics.
A standard model for recommender systems is the matrix completion setting: given a partially known matrix of ratings given by users (rows) to items (columns), infer the unknown ratings. In the last decades, a few attempts have been made to handle this objective with neural networks, but recently an architecture based on autoencoders proved to be a promising approach. In this paper, we enhance that architecture (i) by using a loss function adapted to input data with missing values, and (ii) by incorporating side information. The experiments demonstrate that while side information only slightly improves the test error averaged over all users/items, it has more impact on cold users/items.
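Point (i) reduces to computing the reconstruction loss only over observed entries, so missing ratings neither act as zeros nor produce gradients. A NumPy stand-in for that loss:

```python
# Reconstruction loss restricted to observed ratings (NaN = missing).
import numpy as np

def masked_mse(pred: np.ndarray, ratings: np.ndarray) -> float:
    mask = ~np.isnan(ratings)                  # observed entries only
    return float(np.mean((pred[mask] - ratings[mask]) ** 2))

R = np.array([[5.0, np.nan, 3.0],
              [np.nan, 4.0, np.nan]])
pred = np.full_like(R, 3.5)
print(masked_mse(pred, R))                     # averages over 3 known cells
```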
The Wikidata platform is a crowdsourced, structured knowledge base aiming to provide integrated, free and language-agnostic facts which are used, among others, by the Wikipedias. Users who actively enter, review and revise data on Wikidata are assisted by a property-suggesting system which provides users with properties that might also be applicable to a given item. We argue that evaluating and subsequently improving this recommendation mechanism, and hence assisting users, can directly contribute to an even more integrated, consistent and extensive knowledge base serving a huge variety of applications. However, the quality and usefulness of such recommendations have not been evaluated yet. In this work, we provide the first evaluation of different approaches aiming to provide users with property recommendations in the process of curating information on Wikidata. We compare the approach currently deployed on Wikidata with two state-of-the-art recommendation approaches stemming from the fields of RDF recommender systems and collaborative information systems. Further, we also evaluate hybrid recommender systems combining these approaches. Our evaluations show that the current recommendation algorithm works well with regard to recall and precision, reaching a recall@7 of 79.71% and a precision@7 of 27.97%. We also find that, in general, incorporating contextual as well as classifying information into the computation of property recommendations can further improve performance significantly.
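A toy version of the property-suggestion task, ranking candidates by co-occurrence with an item's existing properties; this is far simpler than the recommenders evaluated in the paper but shows the interface:

```python
# Toy co-occurrence property recommender; item data is invented.
from collections import Counter

items = [  # property sets of existing Wikidata-like items
    {"P31", "P569", "P570"}, {"P31", "P569", "P106"},
    {"P31", "P106", "P27"},  {"P31", "P569", "P27"},
]

def recommend(current: set, k: int = 3) -> list:
    """Rank properties by co-occurrence with the item's current ones."""
    scores = Counter()
    for props in items:
        if current & props:
            scores.update(props - current)
    return [p for p, _ in scores.most_common(k)]

print(recommend({"P31", "P569"}))  # e.g. ['P106', 'P27', 'P570']
```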
Motivated by applications in cryptography, we introduce and study the problem of distribution design. The goal of distribution design is to find a joint distribution on n random variables that satisfies a given set of constraints on the marginal distributions. Each constraint can either require that two sequences of variables be identically distributed or, alternatively, that the two sequences have disjoint supports. We present several positive and negative results on the existence and efficiency of solutions for a given set of constraints. Distribution design can be seen as a strict generalization of several well-studied problems in cryptography. These include secret sharing, garbling schemes, and non-interactive protocols for secure multiparty computation. We further motivate the problem and our results by demonstrating their usefulness towards realizing non-interactive protocols for ad-hoc secure multiparty computation, in which any subset of the parties may choose to participate and the identity of the participants should remain hidden to the extent possible.