Biblio
To prevent unauthorized access by adversaries, a strong authentication scheme is a vital security requirement in client-server inter-networking systems. Such schemes must verify the legitimacy of users in real-time environments and establish a dynamic session key for subsequent communication. Recently, T. H. Chen and J. C. Huang proposed a two-factor authentication framework, claiming that the scheme is secure against most existing attacks. However, we show that Chen and Huang's scheme has several critical weaknesses in real-time environments. The scheme is prone to man-in-the-middle and information leakage attacks. Furthermore, it does not provide two essential security services, namely user anonymity and session key establishment. In this paper, we present an enhanced user-participating authentication scheme which overcomes all the weaknesses of Chen and Huang's scheme and provides most of the essential security features.
In this paper, we present an enhancement of a lightweight key-policy attribute-based encryption (KP-ABE) scheme designed for the Internet of Things (IoT). The KP-ABE scheme was claimed to achieve ciphertext indistinguishability under chosen-plaintext attack in the selective-set model, but we show that it is insecure even under a weaker security notion, namely one-way encryption under the same attack and model. In particular, we show that an attacker can decrypt a ciphertext which does not satisfy the policy imposed on his decryption key. Subsequently, we propose an efficient fix to the KP-ABE scheme and extend it to a hierarchical KP-ABE (H-KP-ABE) scheme that can support role delegation in IoT applications. An example of applying our H-KP-ABE to an IoT-connected healthcare system is given to highlight the benefit of the delegation feature. Lastly, using the NIST curves secp192k1 and secp256k1, we benchmark the fixed (hierarchical) KP-ABE scheme on an Android phone, and the results show that the scheme remains the fastest in the literature.
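For illustration, the following minimal Python sketch shows what it means for a set of ciphertext attributes to satisfy, or fail, a key policy in KP-ABE, using a toy AND/OR policy tree; the attack above lets decryption succeed even in the failing case. The real scheme enforces the policy with secret sharing over elliptic-curve groups, which is not modelled here, and the attribute names are invented.

```python
# Toy KP-ABE policy-satisfaction check: a policy is a tree of AND/OR nodes
# over attribute leaves. This only models the access-structure logic, not
# the cryptographic enforcement used by the scheme.
def satisfies(policy, attrs):
    op, children = policy
    if op == "ATTR":
        return children in attrs
    results = [satisfies(child, attrs) for child in children]
    return all(results) if op == "AND" else any(results)

# Key policy: (doctor AND cardiology) OR admin  -- hypothetical attributes.
policy = ("OR", [("AND", [("ATTR", "doctor"), ("ATTR", "cardiology")]),
                 ("ATTR", "admin")])

print(satisfies(policy, {"doctor", "cardiology"}))  # True: decryption should succeed
print(satisfies(policy, {"nurse"}))                 # False: decryption must fail,
                                                    # yet the described attack lets it succeed
```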
This paper proposes a method that combines the parallel-processing concepts of DNA computing and genetic algorithm (GA) environments with a novel mechanism for distinguishing and discovering cryptosystem keys. The work makes three contributions: first, the adoption of a final key-sequence mechanism based on the principle of interconnected sequence parts; second, the exploitation of the parallelism provided by the GA to search for the counter values of the challenge sequences used by the discrimination mechanism; and third, most importantly for breaking the cipher, the use of the parallelism offered by DNA computing to discover the basic encryption key. The proposed method constructs a combined set of files containing binary sequences produced by substituting the guessed attributes into the binary equation system of the cryptosystem, as well as files containing all possible DNA strands for all successive cipher characters. These files are processed starting from the first character file: a key sequence is extracted from each sequence in that file and processed together with the binary counter sequences produced by the GA. The aim of the paper is to exploit and implement the theoretical principles of parallelism provided by the biological environment, together with the new sequence-recognition mechanism, in cryptanalysis.
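As a loose illustration of the GA-driven parallel key search described above (not the authors' DNA/GA pipeline), the following Python sketch evolves a population of candidate binary keys against a single known plaintext/ciphertext pair of a toy XOR cipher; the key length and GA parameters are arbitrary.

```python
# Minimal genetic-algorithm key search against a toy XOR cipher.
import random

KEY_LEN = 16
SECRET_KEY = [random.randint(0, 1) for _ in range(KEY_LEN)]
PLAINTEXT  = [random.randint(0, 1) for _ in range(KEY_LEN)]
CIPHERTEXT = [p ^ k for p, k in zip(PLAINTEXT, SECRET_KEY)]

def fitness(candidate):
    # Number of ciphertext bits reproduced by the candidate key.
    return sum((p ^ c) == ct for p, c, ct in zip(PLAINTEXT, candidate, CIPHERTEXT))

def evolve(pop_size=50, generations=200, mutation_rate=0.05):
    population = [[random.randint(0, 1) for _ in range(KEY_LEN)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == KEY_LEN:        # exact key recovered
            return population[0]
        parents = population[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randint(1, KEY_LEN - 1)      # single-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

print("recovered key:", evolve())
print("actual key:   ", SECRET_KEY)
```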
A key exchange protocol is an important primitive in the field of information and network security and is used to exchange a common secret key among various parties. A number of key exchange protocols exist in the literature, and most of them are based on the Diffie-Hellman (DH) problem. However, these DH-type protocols cannot resist modern computing technologies such as quantum computing and grid computing. Therefore, a more powerful non-DH-type key exchange protocol is required that can resist quantum and exponential attacks. In 2013, Lei and Liao therefore proposed a lattice-based key exchange protocol. Their protocol is related to NTRU-ENCRYPT and NTRU-SIGN and is hence referred to as NTRU-KE. In this paper, we identify that NTRU-KE lacks an authentication mechanism and suffers from a man-in-the-middle (MITM) attack. This attack may allow an adversary to impersonate authenticated users and cause a wrong key to be exchanged.
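The following Python sketch illustrates the man-in-the-middle pattern on an unauthenticated key exchange. It uses classical Diffie-Hellman with toy parameters purely for illustration, rather than NTRU-KE's lattice operations; the interception succeeds because neither party authenticates the public values it receives, which is exactly the gap identified above.

```python
# MITM on an unauthenticated key exchange (toy Diffie-Hellman for illustration).
import random

P, G = 2_147_483_647, 5           # toy group parameters, far too small for real use

def keypair():
    x = random.randrange(2, P - 1)
    return x, pow(G, x, P)

# Honest parties generate their key pairs.
a_priv, a_pub = keypair()          # Alice
b_priv, b_pub = keypair()          # Bob

# Mallory substitutes her own public value in both directions.
m_priv, m_pub = keypair()

key_alice_mallory = pow(m_pub, a_priv, P)   # Alice believes this is shared with Bob
key_bob_mallory   = pow(m_pub, b_priv, P)   # Bob believes this is shared with Alice
assert key_alice_mallory == pow(a_pub, m_priv, P)
assert key_bob_mallory   == pow(b_pub, m_priv, P)
print("attacker shares a key with each side and can relay or modify all traffic")
```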
We present attacks that use only the volume of responses to range queries to reconstruct databases. Our focus is on practical attacks that work for large-scale databases with many values and records, without requiring assumptions on the data or query distributions. Our work improves on the previous state-of-the-art due to Kellaris et al. (CCS 2016) in all of these dimensions. Our main attack targets reconstruction of database counts and involves a novel graph-theoretic approach. It generally succeeds when R, the number of records, exceeds N^2/2, where N is the number of possible values in the database. For a uniform query distribution, we show that it requires volume leakage from only O(N^2 log N) queries (cf. O(N^4 log N) in prior work). We present two ancillary attacks. The first identifies the value of a new item added to a database using the volume leakage from fresh queries, in the setting where the adversary knows or has previously recovered the database counts. The second shows how to efficiently recover the ranges involved in queries in an online fashion, given an auxiliary distribution describing the database. Our attacks are all backed with mathematical analyses and extensive simulations using real data.
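The following Python sketch conveys, under a strong simplifying assumption, why query volumes are so damaging: if an observer could associate volumes with the prefix ranges [1, k], simple differencing would recover every count exactly. The paper's attack is far more general, working from the multiset of volumes via a graph-theoretic reconstruction; the values below are made up.

```python
# Recovering database counts from prefix-range volumes by differencing.
import random

N = 8                                              # number of possible values
counts = [random.randint(0, 20) for _ in range(N)] # hidden database counts

# Volume (number of matching records) observed for each prefix range [1, k].
prefix_volumes = [sum(counts[:k + 1]) for k in range(N)]

# Differencing adjacent prefix volumes recovers every count exactly.
recovered = [prefix_volumes[0]] + [
    prefix_volumes[k] - prefix_volumes[k - 1] for k in range(1, N)
]
assert recovered == counts
print("database counts recovered from volumes alone:", recovered)
```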
A computer emulator of quantum information exchange is presented which takes into account imperfections of a real quantum channel, such as noise and attenuation, that make it necessary to increase the number of photons per pulse. The Qt Creator C++ program package provides an evaluation of the ability to detect unauthorized access as well as of the amount of information intercepted by an intruder.
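A tiny Python sketch of one effect such an emulator must model, with illustrative numbers only: if each photon reaches the detector with probability t through an attenuated channel, the probability that at least one photon of an n-photon pulse is detected is 1 - (1 - t)^n, which is why attenuation forces more photons per pulse (and in turn gives an intruder more to intercept).

```python
# Detection probability of an n-photon pulse over a lossy channel.
def detection_probability(t, n):
    # t: per-photon transmission probability; n: photons per pulse.
    return 1.0 - (1.0 - t) ** n

for n in (1, 5, 10, 20):
    print(f"{n:2d} photons/pulse -> detection probability {detection_probability(0.1, n):.3f}")
```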
Recently, Jung et al. [1] proposed a data access privilege scheme and claimed that their scheme addresses data and identity privacy as well as multi-authority settings, and provides data access privileges for attribute-based encryption. In this paper, we show that this scheme, and also its former and latest versions (i.e., [2] and [3], respectively), suffer from a number of weaknesses in terms of fine-grained access control, user and authority collusion attacks, user authorization, and user anonymity protection. We then propose a new scheme that overcomes these shortcomings. We also prove the security of our scheme against user collusion attacks, authority collusion attacks and chosen-plaintext attacks. Lastly, we show that the efficiency of our scheme is comparable with existing related schemes.
A strong designated verifier signature (SDVS) is a variant of the traditional digital signature that allows a signer to designate an intended receiver as the sole verifier, rather than allowing anyone to verify. To do this, a signer must incorporate the verifier's public key into the signing procedure such that only the intended receiver can verify this signature with his/her private key. Such a signature further enables the designated verifier to simulate a computationally indistinguishable transcript intended for himself. Consequently, no one can identify the real signer's identity from a candidate signer and a designated verifier, which is referred to as the property of signer ambiguity. A strong notion of signer ambiguity states that no polynomial-time adversary can distinguish the real signer of a given SDVS that has not been received by the designated verifier, even if the adversary has obtained the signer's private key. In 2013, Islam and Biswas proposed a provably secure certificateless strong designated verifier signature (CL-SDVS) scheme based on bilinear pairings. In this paper, we demonstrate that their scheme fails to satisfy strong signer ambiguity and must assume a trusted private key generator (PKG). In other words, their CL-SDVS scheme is vulnerable to both key-compromise and malicious-PKG attacks. Additionally, we present an improved variant that eliminates these weaknesses.
Cloud storage can provide outsourced data services for both organizations and individuals. However, cloud storage still faces many challenges, e.g., public integrity auditing, support for dynamic data, and low computational audit cost. To address these problems, a number of techniques have been proposed. Recently, Tian et al. proposed a novel public auditing scheme for secure cloud storage based on a new data structure, the dynamic hash table (DHT). The authors claimed that their scheme was proven secure. Unfortunately, through our security analysis, we find that the scheme suffers from one attack and one security shortcoming. The attack is that an adversary can forge data to destroy the correctness of files without being detected. The shortcoming is that the update operations on data blocks are vulnerable and can easily be tampered with. Finally, we give countermeasures to remedy these security problems.
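The following Python sketch shows, in the simplest possible form, what detecting such a forgery means operationally: the auditor keeps a tag per block, and any silent modification fails the audit. Tian et al.'s scheme uses a dynamic hash table and cryptographic tags rather than the plain hashes used here, so this is only an illustration.

```python
# Toy block-level integrity audit with plain hashes.
import hashlib

blocks = [b"block-0", b"block-1", b"block-2"]
tags = [hashlib.sha256(b).hexdigest() for b in blocks]   # held by the auditor

# Adversary silently replaces a block on the storage server.
blocks[1] = b"forged!"

audit_ok = all(hashlib.sha256(b).hexdigest() == t for b, t in zip(blocks, tags))
print("audit passed:", audit_ok)                          # False: forgery detected
```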
Chaotic systems and cryptography share some common features. Due to the close relationship between chaotic systems and cryptosystems, researchers have tried to combine the two. In this study, a security analysis was performed of an encryption algorithm that aims to encrypt data with ECG signals and chaotic functions, using the logistic map for text encryption and the Henon map for image encryption. In the analysed algorithm, text and image data can be encrypted at the same time. In addition, ECG signals are used to determine the initial conditions and control parameters of the chaotic functions in order to personalize the encryption algorithm. In this cryptanalysis study, the inadequacy of the mentioned process and the weaknesses of the proposed method are identified. The encryption algorithm does not provide a key space large enough for the necessary security level, and the secret key can be obtained from only one plaintext/ciphertext pair through a chosen-plaintext attack.
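To make the last point concrete, here is a minimal Python sketch (our own illustration, not the analysed algorithm) of a logistic-map keystream cipher: a single chosen plaintext/ciphertext pair reveals the keystream by XOR, and that keystream then decrypts any other message produced from the same initial conditions.

```python
# Logistic-map keystream cipher and a one-pair chosen-plaintext break.
def logistic_keystream(x0, r, n):
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)                  # logistic map iteration
        out.append(int(x * 255) & 0xFF)        # quantise the state to a key byte
    return out

def xor_cipher(data, stream):
    return bytes(d ^ s for d, s in zip(data, stream))

x0, r = 0.3141592, 3.99                        # secret initial condition and parameter
msg1 = b"chosen plaintext"                     # message the attacker gets encrypted
msg2 = b"secret message!!"                     # victim's message, same key settings

c1 = xor_cipher(msg1, logistic_keystream(x0, r, len(msg1)))
c2 = xor_cipher(msg2, logistic_keystream(x0, r, len(msg2)))

# Attacker: knows only (msg1, c1); recovers the keystream and decrypts c2.
keystream = [m ^ c for m, c in zip(msg1, c1)]
print(xor_cipher(c2, keystream))               # b'secret message!!'
```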
π-Cipher is one of the twenty-nine candidates in the second round of the CAESAR competition for authenticated ciphers. π-Cipher uses a parallel sponge construction, based upon an ARX permutation. This work shows several state recovery attacks, on up to three rounds. These attacks use known values in the function's bitrate, combined with values found through exhaustive search, to retrieve the remaining values in the internal state. These attacks can break one round, for any variant of π-Cipher, in negligible time. They can also break two or three rounds much faster than exhaustive search on the key, for some variants. However, these attacks only work against version 1 of π-Cipher, due to the differences in the padding function for version 2.0. To fill this gap, this work also includes a one round attack against version 2.0, building upon the distinguisher present in the π-Cipher submission document.
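The following Python sketch conveys the flavour of such a state-recovery attack on a toy sponge: the attacker knows the bitrate part of the state (absorbed blocks and squeezed output) and exhaustively searches only the small hidden capacity part. The 16-bit state and the stand-in permutation are illustrative assumptions and have nothing to do with π-Cipher's actual ARX permutation or parameters.

```python
# Toy sponge: recover the hidden capacity by brute force over the capacity only.
RATE, CAPACITY = 8, 8                          # toy sizes in bits

def toy_permutation(state):
    # Stand-in for a weak (round-reduced) permutation on a 16-bit state.
    return ((state * 0x9E) ^ (state >> 3)) & 0xFFFF

def squeeze(rate_block, capacity_part):
    state = (rate_block << CAPACITY) | capacity_part
    return toy_permutation(state) >> CAPACITY  # only the rate part is observable

secret_capacity = 0b10110010                   # hidden part of the internal state
message_block = 0b01011100                     # known absorbed block (rate part)
observed = squeeze(message_block, secret_capacity)

# 2^8 guesses over the capacity instead of 2^16 over the full state.
candidates = [c for c in range(1 << CAPACITY)
              if squeeze(message_block, c) == observed]
print("capacity candidates:", candidates, "- secret included:", secret_capacity in candidates)
```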
Physical layer security for wireless communication is broadly considered a promising approach to protect data confidentiality against eavesdroppers. However, despite its ample theoretical foundation, the transition to practical implementations of physical-layer security still lacks success. A close inspection of physical-layer security designs proven vulnerable reveals that the flaws are usually overlooked when the scheme is evaluated only against an inferior, single-antenna eavesdropper. Meanwhile, the attacks exposing vulnerabilities often lack theoretical justification. To reduce the gap between theory and practice, we posit that a physical-layer security scheme must be studied under multiple adversarial models to fully grasp its security strength. In this regard, we evaluate a specific physical-layer security scheme, i.e. orthogonal blinding, under multiple eavesdropper settings. We further propose a practical "ciphertext-only attack" that allows eavesdroppers to recover the original message by exploiting the low-entropy fields in wireless packets. By means of simulation, we are able to reduce the symbol error rate at an eavesdropper to below 1% using only the eavesdropper's received data and general knowledge about the format of the wireless packets.
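The following numpy sketch illustrates the underlying idea under an idealised two-antenna eavesdropper model of our own choosing, not the paper's simulation setup: symbols the eavesdropper can guess, such as low-entropy header fields, serve as training data for a least-squares filter that suppresses the artificial noise, after which the unknown payload symbols are demodulated with a low symbol error rate.

```python
# Least-squares filter trained on guessable symbols to defeat artificial noise.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_payload = 64, 256

h = rng.standard_normal(2) + 1j * rng.standard_normal(2)   # data channel to 2 eavesdropper antennas
g = rng.standard_normal(2) + 1j * rng.standard_normal(2)   # artificial-noise channel

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])

def transmit(symbols):
    # Received 2 x n matrix: data over h plus artificial noise over g.
    noise = rng.standard_normal(len(symbols)) + 1j * rng.standard_normal(len(symbols))
    return np.outer(h, symbols) + np.outer(g, noise)

train_sym = rng.choice(qpsk, n_train)       # "known" low-entropy header symbols
payload_sym = rng.choice(qpsk, n_payload)   # secret payload symbols

Y_train = transmit(train_sym)
Y_payload = transmit(payload_sym)

# Solve Y_train.T @ v = train_sym in the least-squares sense, then filter the payload.
v, *_ = np.linalg.lstsq(Y_train.T, train_sym, rcond=None)
estimate = v @ Y_payload

decide = lambda z: np.sign(z.real) + 1j * np.sign(z.imag)  # hard QPSK decision
ser = np.mean(decide(estimate) != decide(payload_sym))
print("eavesdropper symbol error rate:", ser)
```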
EPC Gen2 tags serve as an international RFID standard for supply chain use worldwide. Such tags are computationally weak devices, unable to perform even basic symmetric-key cryptographic operations. For this reason, implementing robust and secure pseudo-random number generators (PRNGs) is a challenging issue for low-cost radio-frequency identification (RFID) tags. In this paper, we study the security of LFSR-based PRNGs implemented on EPC Gen2 tags and exploit the LFSR-based structure to provide better constructions. We provide a cryptanalysis of J3Gen, an LFSR-based PRNG proposed by Sugei et al. [1], [2] for EPC Gen2 tags, using a distinguishing attack, and make observations on its input using the NIST randomness tests. We also test the PRNG in EPC Gen2 RFID tags using NIST SP 800-22. As a countermeasure, we propose two modified models based on the results of the security analysis. We show that our constructions perform better than J3Gen in terms of computational cost and statistical properties.
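As a structural illustration (toy register size and taps, not the actual J3Gen design), the following Python sketch shows the weakness class behind such attacks: the output of a single LFSR is linear, so observing as many consecutive output bits as the register is long reveals the full state and makes every later bit predictable.

```python
# Toy Fibonacci LFSR: the first n output bits of an n-bit register expose its state.
def lfsr_bits(state, taps, n):
    # state: list of bits (index 0 is the output end); taps: feedback positions.
    out = []
    for _ in range(n):
        out.append(state[0])
        feedback = 0
        for t in taps:
            feedback ^= state[t]
        state = state[1:] + [feedback]
    return out

seed = [1, 0, 1, 1, 0, 0, 1, 0]                # 8-bit toy seed
taps = [0, 2, 3, 4]                            # toy feedback taps
stream = lfsr_bits(seed[:], taps, 32)

# The first 8 output bits are exactly the seed, so the rest is predictable.
observed = stream[:8]
predicted = lfsr_bits(observed[:], taps, 32)
assert predicted == stream
print("predicted the full keystream from 8 observed bits")
```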
Ownership transfer of an RFID tag means that control of a tagged product changes hands along the supply chain. Recently, Doss et al. proposed two secure RFID tag ownership transfer (RFID-OT) protocols based on quadratic residues. However, we find that they are vulnerable to a desynchronization attack. The attack is probabilistic; with the parameters adopted in the protocols, its success probability is 93.75%. We also show that the use of the tag pseudonym h(TID) and the new secret key KTID is not feasible. In order to solve these problems, we propose improved schemes. Security analysis shows that the new protocols can resist the desynchronization attack as well as other attacks. With their performance optimized, the new protocols are more practical and feasible for large-scale deployment of RFID tags.
The Center for Strategic and International Studies estimates the annual cost of cyber crime at more than $400 billion. Most notable are the recent digital identity thefts that compromised millions of accounts. These attacks emphasize the security problems of using clonable static information. One possible solution is the use of a physical device known as a Physically Unclonable Function (PUF). PUFs can be used to create encryption keys, generate random numbers, or authenticate devices. While the concept shows promise, current PUF implementations are inherently problematic: they exhibit inconsistent behavior, are expensive, are susceptible to modeling attacks, and are permanent. Therefore, we propose a new solution by which an unclonable, dynamic digital identity is created between two communication endpoints such as mobile devices. This Physically Unclonable Digital ID (PUDID) is created by injecting a data-scrambling PUF device at the data origin point that corresponds to a unique and matching descrambler/hardware authentication at the receiving end. This device is designed using macroscopic, intentional anomalies, making it inexpensive to produce. PUDID is resistant to cryptanalysis due to the separation of the challenge-response pair and a series of hash functions. PUDID is also unique in that, by combining the PUF device identity with a dynamic human identity, we can create true two-factor authentication. We also propose an alternative solution that eliminates the need for a PUF mechanism altogether by combining tamper-resistant capabilities with a series of hash functions. This tamper-resistant device, referred to as a Quasi-PUDID (Q-PUDID), modifies input data, using a black-box mechanism, in an unpredictable way. By mimicking PUF attributes, Q-PUDID is able to avoid traditional PUF challenges, thereby providing high-performing physical identity assurance with or without a low-performing PUF mechanism. Three different application scenarios with mobile devices for PUDID and Q-PUDID have been analyzed to show their unique advantages over traditional PUFs and to outline the potential for placement in a host of applications.
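As a rough illustration of the hash-chained, two-factor challenge-response idea (the derivation and names below are our own simplification, not the PUDID construction), the following Python sketch combines a per-device secret, standing in for the scrambler behaviour, with a user-supplied factor and a fresh server challenge, so that a replayed response is rejected.

```python
# Toy two-factor challenge-response built from a chain of hash invocations.
import hashlib, os

device_secret = os.urandom(32)     # stands in for the unclonable device behaviour
user_pin = b"4921"                 # second, human factor (illustrative value)

def respond(challenge, secret, pin):
    # Two chained hash invocations; the real design differs in detail.
    inner = hashlib.sha256(secret + challenge).digest()
    return hashlib.sha256(inner + pin).hexdigest()

# Fresh challenge: device plus user produce a response the server can verify.
challenge = os.urandom(16)
assert respond(challenge, device_secret, user_pin) == respond(challenge, device_secret, user_pin)

# A response replayed against a new challenge no longer matches.
old_response = respond(challenge, device_secret, user_pin)
new_challenge = os.urandom(16)
print("replay accepted:", old_response == respond(new_challenge, device_secret, user_pin))  # False
```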